https://www.dsprelated.com/freebooks/pasp/Wave_Digital_Mass_Spring_Oscillator.html
### Wave Digital Mass-Spring Oscillator

Let's look again at the mass-spring oscillator of §F.3.4, but this time without the driving force (which effectively decouples the mass and spring into separate first-order systems). The physical diagram and equivalent circuit are shown in Fig.F.32 and Fig.F.33, respectively. Note that the mass and spring can be regarded as being in parallel or in series. Under the parallel interpretation, we have the WDF shown in Fig.F.34 and Fig.F.35. The reflection coefficient can be computed, as usual, from the first alpha parameter. This result, $\rho = (R_2 - R_1)/(R_2 + R_1)$, is just the "impedance step over impedance sum", so no calculation was really necessary.

#### Oscillation Frequency

From Fig.F.33, we can see that the impedance of the parallel combination of the mass and spring is given by

$$R(s) = \frac{ms\cdot\frac{k}{s}}{ms + \frac{k}{s}} = \frac{mks}{ms^2 + k} \qquad \text{(F.38)}$$

(using the product-over-sum rule for combining impedances in parallel). The poles of this impedance are given by the roots of the denominator polynomial in $s$:

$$ms^2 + k = 0 \quad\Longrightarrow\quad s = \pm j\sqrt{\frac{k}{m}}. \qquad \text{(F.39)}$$

The resonance frequency of the mass-spring oscillator is therefore

$$\omega_0 = \sqrt{\frac{k}{m}}. \qquad \text{(F.40)}$$

Since the poles are on the $j\omega$ axis, there is no damping, as we expect. We can now write the reflection coefficient (see Fig.F.35) as $\rho = \cos(\omega_d T)$, where $\omega_d$ denotes the digital resonance frequency and $T$ the sampling interval. We see that dc ($\omega_d = 0$) corresponds to $\rho = 1$, and half the sampling rate ($\omega_d = \pi/T$) corresponds to $\rho = -1$.

#### DC Analysis of the WD Mass-Spring Oscillator

Considering the dc case first ($\rho = 1$), we see from Fig.F.35 that the state variable will circulate unchanged in the isolated loop on the left. Let's call this value $x_0$. Then the physical force on the spring is constant, as given by Eq.(F.41). The loop on the right in Fig.F.35 receives $x_0$ and adds it to its own circulating state each sample. Since the same value is added every sample, the state in that loop grows linearly in amplitude, as in Eq.(F.42). At first, this result might appear to contradict conservation of energy, since the state amplitude seems to be growing without bound.
However, the physical force is fortunately better behaved, as given by Eq.(F.43). Since the spring and mass are connected in parallel, it must be true that they are subjected to the same physical force at all times. Comparing Equations (F.41-F.43) verifies this to be the case.

#### WD Mass-Spring Oscillator at Half the Sampling Rate

Under the bilinear transform, the point $s = \infty$ maps to $z = -1$ (half the sampling rate). It is therefore no surprise that, given $\rho = -1$, inspection of Fig.F.35 reveals that any alternating sequence (a sinusoid sampled at half the sampling rate) will circulate unchanged in the loop on the right, which is now isolated. Let $y_0(n)$ denote this alternating sequence. The loop on the left receives $y_0(n)$ and adds it to its own state each sample, producing a linearly growing alternating sequence. However, the physical spring force is well behaved. As a check, the mass force can be computed as well, and it agrees with the spring force, as it must.

#### Linearly Growing State Variables in the WD Mass-Spring Oscillator

It may seem disturbing that such a simple, passive, physically rigorous simulation of a mass-spring oscillator should have to make use of state variables which grow without bound for the limiting cases of simple harmonic motion at frequencies zero and half the sampling rate. This is obviously a valid concern in practice as well. However, it is easy to show that this only happens at dc and $f_s/2$, and that there is a true degeneracy at these frequencies, even in the physics. For all frequencies strictly between dc and $f_s/2$ (i.e., the entire audio range for typical sampling rates), such state-variable growth cannot occur. Let's take a closer look at this phenomenon, first from a signal-processing point of view, and second from a physical point of view.

#### A Signal Processing Perspective on Repeated Mass-Spring Poles

Going back to the poles of the mass-spring system in Eq.(F.39), we see that, as the imaginary parts of the two poles, $\pm\omega_0$, approach zero, the poles come together at $s=0$ to create a repeated pole.
The same thing happens as $\omega_0 \to \infty$, since both poles then go to "the point at infinity". It is a well-known fact from linear systems theory that two poles at the same point $s = s_0$ in the $s$ plane can correspond to an impulse-response component of the form $t e^{s_0 t}$, in addition to the component $e^{s_0 t}$ produced by a single pole at $s_0$. In the discrete-time case, a double pole at $z = z_0$ can give rise to an impulse-response component of the form $n z_0^n$. This is the fundamental source of the linearly growing internal states of the wave digital sine oscillator at dc and $f_s/2$. It is interesting to note, however, that such modes are always unobservable at any physical output, such as the mass force or spring force, which are not linearly growing.

#### Physical Perspective on Repeated Poles in the Mass-Spring System

In the physical system, dc and infinite frequency are in fact strange cases. In the case of dc, for example, a nonzero constant force implies that the mass is under constant acceleration, so its velocity is linearly growing. Our simulation predicts this, since, using Eq.(F.43) and Eq.(F.42), the mass velocity takes the form

$$v(n) = v(0) + \frac{f_0}{m}\,nT,$$

where $f_0$ denotes the constant applied force. The dc term is therefore accompanied by a linearly growing term in the physical mass velocity. It is therefore unavoidable that we have some means of producing an unbounded, linearly growing output variable.

#### Mass-Spring Boundedness in Reality

To approach the limit of $\omega_0 = \sqrt{k/m} \to 0$, we must either take the spring constant $k$ to zero, or the mass $m$ to infinity, or both. In the case of $k \to 0$, the constant force must approach zero, and we are left with at most a constant mass velocity in the limit (not a linearly growing one, since there can be no dc force in the limit). When the spring force reaches zero, only zeros will feed into the loop on the right in Fig.F.35, thus avoiding a linearly growing velocity, as demanded by the physics. (A constant velocity is free to circulate in the loop on the right, but the loop on the left must be zeroed out in the limit.)
In the case of $m \to \infty$, the mass becomes unaffected by the spring force, so its final velocity must be zero. Otherwise, the attached spring would keep compressing or stretching forever, and this would take infinite energy. (Another way to arrive at this conclusion is to note that the final kinetic energy of the mass would be $\frac{1}{2}mv^2 \to \infty$.) Since the total energy in an undriven mass-spring oscillator is always constant, the infinite-mass limit must be accompanied by a zero-velocity limit. This means the mass's state variable in Fig.F.35 must be forced to zero in the limit so that there will be no linearly growing solution at dc.

In summary, when two or more system poles approach each other to form a repeated pole, care must be taken to ensure that the limit is approached in a physically meaningful way. In the case of the mass-spring oscillator, for example, any change in the spring constant $k$ or mass $m$ must be accompanied by the physically appropriate change in the state variables. It is obviously incorrect, for example, to suddenly set $k = 0$ in the simulation without simultaneously clearing the spring's state variable, since the force across an infinitely compliant spring can only be zero. Similar remarks apply to repeated poles corresponding to $f_s/2$. In this case, the mass and spring basically change places.

#### Energy-Preserving Parameter Changes (Mass-Spring Oscillator)

If the change in $m$ or $k$ is deemed to be "internal", that is, involving no external interactions, the appropriate accompanying change in the internal state variables is that which conserves energy. For the mass and its velocity, for example, we must have

$$\frac{1}{2} m_1 v_1^2 = \frac{1}{2} m_2 v_2^2,$$

where $m_1, m_2$ denote the mass values before and after the change, respectively, and $v_1, v_2$ denote the corresponding velocities. The velocity must therefore be scaled according to

$$v_2 = v_1 \sqrt{\frac{m_1}{m_2}},$$

since this holds the kinetic energy of the mass constant.
Note that the momentum of the mass is changed, however, since

$$m_2 v_2 = m_2 v_1 \sqrt{\frac{m_1}{m_2}} = \sqrt{\frac{m_2}{m_1}}\, m_1 v_1.$$

If the spring constant is to change from $k_1$ to $k_2$, the instantaneous spring displacement must satisfy

$$\frac{1}{2} k_1 x_1^2 = \frac{1}{2} k_2 x_2^2 \quad\Longrightarrow\quad x_2 = x_1 \sqrt{\frac{k_1}{k_2}}.$$

In a velocity-wave simulation, displacement is the integral of velocity. Therefore, the energy-conserving velocity correction is impulsive in this case.

#### Exercises in Wave Digital Modeling

1. Comparing digital and analog frequency formulas. This first exercise verifies that the elementary "tank circuit" always resonates at exactly the frequency it should, according to the bilinear-transform frequency mapping $\omega_a = \frac{2}{T}\tan(\omega_d T/2)$, where $\omega_a$ denotes "analog frequency" and $\omega_d$ denotes "digital frequency".
   1. Find the poles of Fig.F.35 in terms of the reflection coefficient $\rho$.
   2. Show that the resonance frequency is given by $\omega_d = \frac{1}{T}\arccos(\rho)$, where $f_s = 1/T$ denotes the sampling rate.
   3. Recall that the mass-spring oscillator resonates at $\omega_a = \sqrt{k/m}$. Relate these two resonance-frequency formulas via the analog-digital frequency map $\omega_a = \frac{2}{T}\tan(\omega_d T/2)$.
   4. Show that the trig identity you discovered in this way is true. I.e., show that

   $$\cos\theta = \frac{1 - \tan^2(\theta/2)}{1 + \tan^2(\theta/2)}.$$
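The bilinear-transform frequency mapping used in the exercise above can be sanity-checked numerically. A minimal sketch (the function name `digital_resonance` is mine, not from the text): the digital image of an analog resonance $\omega_a$ is $\omega_d = \frac{2}{T}\arctan(\omega_a T/2)$, which matches $\omega_a$ at low frequencies and saturates at half the sampling rate as $\omega_a \to \infty$.

```python
import math

def digital_resonance(omega_a, fs):
    # Inverse of the bilinear map omega_a = (2/T) * tan(omega_d * T / 2):
    # omega_d = (2/T) * atan(omega_a * T / 2), with T = 1/fs.
    T = 1.0 / fs
    return (2.0 / T) * math.atan(omega_a * T / 2.0)

fs = 48_000.0
print(digital_resonance(2 * math.pi * 1000.0, fs))  # close to 2*pi*1000 at low frequency
print(digital_resonance(1e12, fs) / (math.pi * fs)) # -> 1: s = infinity maps to fs/2
```

This makes the text's observation concrete: finite analog resonances land strictly below $f_s/2$, so the degenerate repeated-pole cases occur only at dc and half the sampling rate.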
http://utilia.readthedocs.io/en/latest/modules/functional/logic.html
# logic Module

## Module Description

Provides common logic operators as functions. The functions come in two forms: objective and boolean. Objective functions work in a manner similar to the `and` and `or` operators built into Python, in that they return one of the original objects provided as an operand. Boolean functions return boolean values rather than the original objects. The names of objective functions start with `o_`. The names of all functions in the module end with `f`, because some of the names would otherwise conflict with Python keywords. The `f` can be understood to mean "function" or "functional version" as opposed to an inline operator.

None of the functions in this module perform true short-circuit evaluation like the Python `and` and `or` operators do. This is because any expressions given as arguments to a function are evaluated before the function is called. The documentation below and the Examples section provide more details.

## Boolean Functions

`utilia.functional.logic.andf(*posargs)`

If no arguments are supplied, returns `True`. Else, returns the result of calling the `all` built-in function on the sequence of arguments. Truth table for two arguments:

| $p$   | $q$   | $p \wedge q$ |
|-------|-------|--------------|
| False | False | False        |
| False | True  | False        |
| True  | False | False        |
| True  | True  | True         |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean.

`utilia.functional.logic.nandf(*posargs)`

If no arguments are supplied, returns `False`. Else, returns the negated result of calling the `all` built-in function on the sequence of arguments. Truth table for two arguments:

| $p$   | $q$   | $p \uparrow q$ |
|-------|-------|----------------|
| False | False | True           |
| False | True  | True           |
| True  | False | True           |
| True  | True  | False          |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean.

`utilia.functional.logic.orf(*posargs)`

If no arguments are supplied, returns `False`. Else, returns the result of calling the `any` built-in function on the sequence of arguments. Truth table for two arguments:

| $p$   | $q$   | $p \vee q$ |
|-------|-------|------------|
| False | False | False      |
| False | True  | True       |
| True  | False | True       |
| True  | True  | True       |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean.

`utilia.functional.logic.norf(*posargs)`

If no arguments are supplied, returns `True`. Else, returns the negated result of calling the `any` built-in function on the sequence of arguments. Truth table for two arguments:

| $p$   | $q$   | $p \downarrow q$ |
|-------|-------|------------------|
| False | False | True             |
| False | True  | False            |
| True  | False | False            |
| True  | True  | False            |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean.

`utilia.functional.logic.xorf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the result of reducing the sequence of arguments with a logical xor function. Truth table for two arguments:

| $p$   | $q$   | $p \veebar q$ |
|-------|-------|---------------|
| False | False | False         |
| False | True  | True          |
| True  | False | True          |
| True  | True  | False         |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.

`utilia.functional.logic.xnorf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the result of reducing the sequence of arguments with a logical xnor function. Truth table for two arguments:

| $p$   | $q$   | $p \Leftrightarrow q$ |
|-------|-------|-----------------------|
| False | False | True                  |
| False | True  | False                 |
| True  | False | False                 |
| True  | True  | True                  |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.

`utilia.functional.logic.impliesf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the result of reducing the sequence of arguments with a logical implies function. Truth table for two arguments:

| $p$   | $q$   | $p \Rightarrow q$ |
|-------|-------|-------------------|
| False | False | True              |
| False | True  | True              |
| True  | False | False             |
| True  | True  | True              |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.

`utilia.functional.logic.nimpliesf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the negated result of reducing the sequence of arguments with a logical implies function. Truth table for two arguments:

| $p$   | $q$   | $p \nRightarrow q$ |
|-------|-------|--------------------|
| False | False | False              |
| False | True  | False              |
| True  | False | True               |
| True  | True  | False              |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.

`utilia.functional.logic.cimpliesf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the result of reducing the sequence of arguments with a logical converse-implies function. Truth table for two arguments:

| $p$   | $q$   | $p \Leftarrow q$ |
|-------|-------|------------------|
| False | False | True             |
| False | True  | False            |
| True  | False | True             |
| True  | True  | True             |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.

`utilia.functional.logic.cnimpliesf(*posargs)`

Converts the sequence of arguments to booleans. If no arguments are supplied, raises an exception. Else, returns the negated result of reducing the sequence of arguments with a logical converse-implies function. Truth table for two arguments:

| $p$   | $q$   | $p \nLeftarrow q$ |
|-------|-------|-------------------|
| False | False | False             |
| False | True  | True              |
| True  | False | False             |
| True  | True  | False             |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: boolean. Raises: `TypeError`, if a required argument is missing.
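For illustration, here are minimal sketches matching the documented behavior of a few of the boolean functions (implementations mine, written to the truth tables above; the module's actual source may differ):

```python
from functools import reduce

def nandf(*posargs):
    # Documented: False with no arguments, else negated all(...)
    return not all(posargs)

def norf(*posargs):
    # Documented: True with no arguments, else negated any(...)
    return not any(posargs)

def impliesf(*posargs):
    # Reduce with logical implication: (p => q) == (not p) or q;
    # raises TypeError when called with no arguments, per the docs.
    if not posargs:
        raise TypeError("impliesf expects at least one argument")
    return reduce(lambda p, q: (not bool(p)) or bool(q), posargs)

print(nandf(True, True))     # False
print(norf(False, False))    # True
print(impliesf(True, False)) # False
```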
## Objective Functions

`utilia.functional.logic.o_andf(*posargs)`

If no arguments are supplied, returns `True`. Else, returns the result of reducing the sequence of arguments with the logical `and` operator. Truth-like table for two arguments:

| $p$         | $q$         | $p \wedge q$ |
|-------------|-------------|--------------|
| zeroish     | zeroish     | $p$          |
| zeroish     | non-zeroish | $p$          |
| non-zeroish | zeroish     | $q$          |
| non-zeroish | non-zeroish | $q$          |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: object of any type.

`utilia.functional.logic.o_orf(*posargs)`

If no arguments are supplied, returns `False`. Else, returns the result of reducing the sequence of arguments with the logical `or` operator. Truth-like table for two arguments:

| $p$         | $q$         | $p \vee q$ |
|-------------|-------------|------------|
| zeroish     | zeroish     | $q$        |
| zeroish     | non-zeroish | $q$        |
| non-zeroish | zeroish     | $p$        |
| non-zeroish | non-zeroish | $p$        |

Parameters: *posargs* (objects of any type) – arbitrary number of positional arguments. Returns: object of any type.

Todo: Create examples.
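Since the docs end with a "Todo" for examples, here is a hedged sketch of the two objective functions based on the documented reduction behavior (implementations mine, not the module's source):

```python
from functools import reduce

def o_andf(*posargs):
    # Reduce with the 'and' operator; returns one of the original operands.
    # Documented to return True when called with no arguments.
    if not posargs:
        return True
    return reduce(lambda p, q: p and q, posargs)

def o_orf(*posargs):
    # Reduce with the 'or' operator; returns one of the original operands.
    # Documented to return False when called with no arguments.
    if not posargs:
        return False
    return reduce(lambda p, q: p or q, posargs)

print(o_andf(3, "x"))   # 'x': all operands non-zeroish, so the last one wins
print(o_andf(0, "x"))   # 0: the first zeroish operand is returned
print(o_orf(0, "", 7))  # 7: the first non-zeroish operand is returned
```

Note that, as the module description warns, both arguments are evaluated before the call, so no short-circuiting occurs.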
http://devmaster.net/posts/18003/bad-intrusive-reference-counting-implementation-please-criticize
May 04, 2010 at 06:56

Hi, I know it's going to hurt, but it's inevitable in learning good programming practices, isn't it? Here's an intrusive reference counting implementation I've created after reading a chapter on handle classes in Stroustrup; it surely contains some awkward constructs that I've inevitably inserted into it as a person who largely dealt with scripting for the last two years. I'd be grateful for suggestions and criticism on how to make it better.

Reference counted class:

```cpp
class cRefCounted {
public:
    cRefCounted() { ref_count = 0; }
    void RefAdd() { ref_count++; }
    void RefRemove() { if(--ref_count<=0) delete this; }
    std::string type() { return typeid(this).name(); }
protected:
    int ref_count;
};
```

Handle class:

```cpp
template<class T>
class tHandle {
public:
    explicit tHandle(cRefCounted* obj) { init(obj); }
    tHandle(const tHandle& handle) { init(handle.obj); }
    virtual ~tHandle() { obj->RefRemove(); }
    T& operator*() const { return *obj; }
    T* operator->() const { return obj; }
    T* operator=(const tHandle& handle) { if(handle.obj!=obj) init(handle.obj); return obj; }
protected:
    T* obj;
    void init(cRefCounted* obj) {
        try {
            this->obj = static_cast<T*>(obj);
        } catch(std::bad_cast&) {
            fprintf(stderr, "Not an instance of cRefCounted\n");
            this->obj = NULL;
        }
    }
};
```

#### 9 Replies

May 04, 2010 at 11:24

Have you even tested this? It is certainly not going to work like you intended B) Some things that are just wrong:

cRefCounted::RefRemove(): it does a 'delete this' but cRefCounted doesn't have a virtual destructor. Therefore, any subclass of cRefCounted will never get destructed appropriately. It is always a good idea to include a virtual destructor if your class is going to be derived from. You may want to make this destructor private if you only want a class to be constructed using new(), rather than as a local/global or as a member of another class. You can make it protected to leave this decision to the derived class.

The cRefCounted::type() function is also flawed.
It uses typeid(this), but 'this' is a pointer, so it will always yield 'cRefCounted*'. If you want an actual runtime type lookup, you should dereference the pointer. Note that this currently won't work either, as cRefCounted doesn't have a vtable (and therefore no type information), but adding the previously suggested virtual destructor should solve this.

tHandle::init(): a static_cast will never fail and never throw a bad_cast. If you want runtime type checking, use dynamic_cast. But this currently fails to compile because dynamic_cast needs type information, which cRefCounted doesn't have, as it doesn't have a vtable B). Another thing is that init() does not remove a reference on the previously held object. If I have a tHandle that points to A and I then let it point to B using tHandle::operator=(), A's refcount will never get decreased.

Some suggestions: I would let the tHandle constructor take a T* rather than a cRefCounted*, to make it compile-time type safe. Of course, bad use will cause a runtime exception if you implement init() using a dynamic_cast, but having a compile-time error will let you catch the error earlier. Also, C++'s design perspective has the overall paradigm that users choose when they want to pay the price. A dynamic_cast is costlier than a static_cast. Create your own casting functions that implement both static_cast and dynamic_cast on the actual pointers. For example:

```cpp
template<class T1, class T2>
tHandle<T1> dynamic_handle_cast(const tHandle<T2>& handle)
{ return tHandle<T1>(dynamic_cast<T1*>(handle.obj)); }

template<class T1, class T2>
tHandle<T1> static_handle_cast(const tHandle<T2>& handle)
{ return tHandle<T1>(static_cast<T1*>(handle.obj)); }
```

You may want to add a templated tHandle conversion constructor to be able to implicitly construct a tHandle<A> from a tHandle<B> if B derives from A.
You can use SFINAE tricks to only include that constructor if B actually is implicitly convertible to A, which I don't want to delve into right now, or you can simply add the ctor and just let the compiler give an error when the actual template instantiation turns out to be incorrect (when B doesn't actually derive from A), in which case the user gets a vaguer error message but the net effect is the same.

There's no need to make tHandle::~tHandle() virtual. tHandle is a class that will probably not ever be derived from. The extra vtable only costs extra performance and memory. Also, keep in mind the current implementation is not thread-safe. This may suit your needs, but it is something to remember :)

May 04, 2010 at 11:59

Ok - thanks for suggestions. Will try to implement them as soon as I have some time.

EDIT: Well - it appeared to work :) At the moment I'm using the MinGW compiler, which doesn't tolerate the private destructors, so I made the destructor virtual but public for the time being.

May 04, 2010 at 12:34

Oh duh, my bad, making a dtor private means the class can't be overridden B). I'd make it protected, so at least you can't manually delete a pointer to a cRefCounted.

May 04, 2010 at 13:13

But there was another error in the code that you haven't spotted - and which hadn't manifested itself in my test application just because the destructor wasn't declared as virtual. And it was a cardinal sin. The init() method DIDN'T INCREASE THE REFERENCE COUNT. The reference count was DECREASED each time the handle destructor was called, but it was never increased. When I made the destructor of the cRefCounted class virtual, I got a runtime error when the program tried to delete a non-existent object. Grrr.

EDIT: No, the count was increased in the original version, but it seems I removed it accidentally when rewriting the code. Anyway, only now can I be certain that the garbage is actually collected. Thanks.
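Pulling the thread's fixes together (a virtual, protected destructor; taking the reference in the constructor; releasing the old object in operator=), a minimal hedged sketch might look like this. The class names follow the thread; `cThing`, `run_demo`, and the `live_objects` counter are mine, added purely to demonstrate that every object is freed:

```cpp
#include <cassert>

static int live_objects = 0;  // demonstration-only leak counter

class cRefCounted {
public:
    cRefCounted() : ref_count(0) { ++live_objects; }
    void RefAdd()    { ++ref_count; }
    void RefRemove() { if (--ref_count <= 0) delete this; }
protected:
    // Virtual so subclasses destruct correctly; protected so callers
    // can't 'delete' through a cRefCounted* directly.
    virtual ~cRefCounted() { --live_objects; }
    int ref_count;
};

template <class T>
class tHandle {
public:
    explicit tHandle(T* p) : obj(p) { if (obj) obj->RefAdd(); }
    tHandle(const tHandle& h) : obj(h.obj) { if (obj) obj->RefAdd(); }
    ~tHandle() { if (obj) obj->RefRemove(); }
    tHandle& operator=(const tHandle& h) {
        if (h.obj != obj) {               // add new reference, then release old
            if (h.obj) h.obj->RefAdd();
            if (obj) obj->RefRemove();
            obj = h.obj;
        }
        return *this;
    }
    T& operator*() const  { return *obj; }
    T* operator->() const { return obj; }
private:
    T* obj;
};

struct cThing : cRefCounted { int value = 42; };

int run_demo() {
    {
        tHandle<cThing> a(new cThing);
        tHandle<cThing> b(a);             // shared: refcount is now 2
        tHandle<cThing> c(new cThing);
        c = a;                            // the second object is released here
        assert((*c).value == 42);
    }                                     // all handles gone: both objects freed
    return live_objects;                  // 0 if nothing leaked
}
```

Taking a `T*` in the constructor (rather than `cRefCounted*`) gives the compile-time type safety suggested above, so no cast is needed at all.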
May 05, 2010 at 11:52

Two points:

1: In init(), why do you want to cast the obj pointer from a cRefCounted* into a T*? It makes no sense, since you only need to access the methods in the base class anyway.

Stupid code:

```cpp
void init(cRefCounted* obj) {
    try {
        this->obj = static_cast<T*>(obj);
    } catch(std::bad_cast&) {
        fprintf(stderr, "Not an instance of cRefCounted\n");
        this->obj = NULL;
    }
}
```

Better code:

```cpp
void init(cRefCounted* obj) {
    this->obj = obj;
}
```

2: Why force all ref-counted objects to inherit your base class? It's very intrusive, so you can't ref-count other stuff like STL vectors or textures. A more practical solution would be to store the reference count in the handle class:

```cpp
template<class T>
class tHandle {
protected:
    static std::map<T*, int> refCounts;
    // (...)
};
```

May 05, 2010 at 12:35

@geon
> 1: In init(), why do you want to cast the obj pointer from a cRefCounted* into a T*? It makes no sense, since you only need to access the methods in the base class anyway.

No, he assigns to this->obj as well, which is a T*.

> 2: Why force all ref-counted objects to inherit your base class? It's very intrusive, so you can't ref-count other stuff like STL vectors or textures.

Because it's more efficient? Besides, if you want a non-intrusive refcount, you could just as well use std::tr1::shared_ptr, which is way more likely to be bug-free.

> A more practical solution would be to store the reference count in the handle class.

So if you have a B that derives from A, and you have a tHandle<A> and a tHandle<B> that point to the same object, then what? Again, you'd better rely on proven technology like shared_ptr.

May 08, 2010 at 18:05

@.oisyn
> No, he assigns to this->obj as well, which is a T*.
> Because it's more efficient?

If the pointer-passing is your bottleneck, you are doing something wrong.
> Besides, if you want a non-intrusive refcount, you could just as well use std::tr1::shared_ptr, which is way more likely to be bug-free.

Absolutely! But doing it yourself is educational. Just make sure you throw it away when it "works", and use something more standard, like boost.

> So if you have a B that derives from A, and you have a tHandle<A> and a tHandle<B> that point to the same object, then what? Again, you'd better rely on proven technology like shared_ptr.

Good point.

May 08, 2010 at 18:09

@.oisyn
> No, he assigns to this->obj as well, which is a T*.

Well, obj should obviously also be a cRefCounted*. Or did I miss something? I do consider you a C++ guru, so I'm expecting there's something I might have misunderstood. :worthy:

May 09, 2010 at 23:13

@geon
> If the pointer-passing is your bottleneck, you are doing something wrong.

True, if pointer passing were as cheap as int passing. A dynamic_cast, however, is not that cheap. Here's a quick test (using VS 2010, which supports some of the new C++0x features):

```cpp
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <windows.h>

struct Base { virtual void foo() = 0; };
struct A : Base { virtual void foo() { } };
struct B : Base { virtual void foo() { } };
struct C : Base { virtual void foo() { } };

int wmain()
{
    std::vector<Base*> v;
    for (int i = 0; i < 1000000; i++)
    {
        int r = std::rand() % 3;
        v.push_back(r == 0 ? (Base*)new A() : r == 1 ? (Base*)new B() : (Base*)new C());
    }
    v.push_back(new A());

    A * ptr;
    auto c = -std::clock();
    for (int i = 0; i < 100; i++)
        std::for_each(v.begin(), v.end(), [&](Base* b) { ptr = dynamic_cast<A*>(b); });
    c += std::clock();
    std::cout << c << " " << ptr << std::endl;

    c = -std::clock();
    for (int i = 0; i < 100; i++)
        std::for_each(v.begin(), v.end(), [&](Base* b) { ptr = static_cast<A*>(b); });
    c += std::clock();
    std::cout << c << " " << ptr << std::endl;
}
```

On my machine, the dynamic_cast takes 4194 milliseconds, while the static_cast takes 58 ms. You pay that price any time you copy-construct or copy-assign a tHandle<T>, no matter which T. But of course, a better reason not to implement it like this is that implicit conversion between unrelated types does not fit the overall C++ paradigm. If you can't implicitly convert an A* to a B*, why should you be able to implicitly convert a tHandle<A> to a tHandle<B>?

> Absolutely! But doing it yourself is educational.

Fair enough! So let's discuss what a non-intrusive implementation would look like :) Your map suggestion obviously wouldn't work for said reasons. You could adjust it slightly to use a single shared instance of map<void*, int>, and use dynamic_cast to cast the T* to void*, which would result in a pointer to the most derived type. Unfortunately this requires T to have a vtable, which means you can't have a tHandle<int>, for example.

The common implementation of std::tr1::shared_ptr allocates a control block upon assignment to the first shared_ptr. This control block holds the reference count, and it's shared amongst all shared_ptrs that point to this object. However, a shared_ptr constructed from or assigned a regular T* creates a new control block, so in order to keep one control block per object you need to work with shared_ptr (or weak_ptr) everywhere, and never recreate a shared_ptr from the native pointer.
An implication of this is that you can't create a shared_ptr that points to 'this' within the class itself. This is what `enable_shared_from_this` is for.
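To make that last point concrete, here's a minimal sketch using the modern `std::` spellings of the same facilities (C++11 and later; the `Node` class and `demo_use_count` helper are illustrative names of mine):

```cpp
#include <memory>

// Deriving from std::enable_shared_from_this lets a method hand out a
// shared_ptr to 'this' that shares the existing control block, instead of
// creating a second one (which would cause a double delete).
struct Node : std::enable_shared_from_this<Node> {
    std::shared_ptr<Node> self() { return shared_from_this(); }
};

long demo_use_count() {
    auto n = std::make_shared<Node>();  // object must already be owned by a shared_ptr
    auto m = n->self();                 // same control block as n
    return n.use_count();               // 2: one count, shared correctly
}
```

Note that `shared_from_this()` is only valid on an object already managed by a `shared_ptr`; calling it on a stack object or inside the constructor is undefined (or throws, since C++17).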
http://math.stackexchange.com/questions/134231/convergence-of-a-n-fracn2nn
# Convergence of $a_n=\frac{n^{2+n}}{n!}$ Convergence of $$a_n=\frac{n^{2+n}}{n!}$$ I used the ratio test and have: $$\lim_{n\to\infty} \frac{(n+1)^{3+n}}{(n+1)!}\cdot \frac{n!}{(n+1)^{2+n}} \\= \lim_{n\to\infty} \frac{(n+1)^{3+n}}{n+1}\cdot \frac{1}{(n+1)^{2+n}}\\= 1$$ Did I do something wrong? Correct answer appears to be $$...=\lim_{n\to\infty}(1+\frac{1}{n})^{n+2}=e$$ - You have $(n+1)^{2+n}$ where you want $n^{2+n}$. – Gerry Myerson Apr 20 '12 at 4:59 ## 1 Answer Correction: $$\lim_{n\to\infty} \frac{(n+1)^{3+n}}{(n+1)!}\cdot \frac{n!}{\color{Blue}n^{2+n}}$$ - Wow ... so many careless mistakes ... in final exam revision ... – Jiew Meng Apr 20 '12 at 6:37
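A quick numeric check of the corrected ratio (a sketch; the helper name is mine): with the fix, $a_{n+1}/a_n$ simplifies to $(1+\frac{1}{n})^{n+2}$, which tends to $e \approx 2.718 > 1$, so the ratio test gives divergence rather than the limit 1 obtained above.

```python
import math

def ratio(n):
    # a_n = n^(2+n)/n!  =>  a_{n+1}/a_n = (1 + 1/n)**(n + 2);
    # computed via logs to avoid overflow for large n.
    return math.exp((n + 2) * math.log1p(1.0 / n))

print(ratio(10_000))  # approximately 2.7183, approaching e
```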
https://zbmath.org/?q=ut%3Apositive+equilibrium+point
## Found 140 Documents (Results 1–100) ### On semimonotone star matrices and linear complementarity problem. (English)Zbl 1484.90125 MSC:  90C33 15B48 Full Text: ### Deterministic and stochastic dynamics of a modified Leslie-Gower prey-predator system with simplified Holling-type IV scheme. (English)Zbl 1471.92259 MSC:  92D25 60G99 Full Text: Full Text: ### Global stability, periodicity and boundedness behavior of a difference equation. (English)Zbl 1463.39037 MSC:  39A30 39A23 39A22 ### A viscosity iterative algorithm technique for solving a general equilibrium problem system. (English)Zbl 07161708 MSC:  47H09 47H10 47J20 Full Text: ### A viscosity nonlinear midpoint algorithm for nonexpansive semigroup. (English)Zbl 07136677 MSC:  47H09 47H10 47J20 Full Text: Full Text: Full Text: Full Text: ### Asymptotic and boundedness behaviour of a rational difference equation. (English)Zbl 1410.39019 MSC:  39A22 39A30 Full Text: Full Text: ### On generalized positive subdefinite matrices and interior point algorithm. (English)Zbl 1429.90080 Kar, Samarjit (ed.) et al., Operations research and optimization. FOTA 2016, Kolkata, India, November 24–26, 2016. Singapore: Springer. Springer Proc. Math. Stat. 225, 3-16 (2018). MSC:  90C33 90C51 Full Text: ### A critical point theorem in bounded convex sets and localization of Nash-type equilibria of nonvariational systems. (English)Zbl 1388.58008 MSC:  58E05 58E30 49J45 Full Text: Full Text: ### A viscosity iterative algorithm for the optimization problem system. (English)Zbl 1484.47169 MSC:  47J25 47H20 47H09 Full Text: ### Stability in a multi-species chemotaxis model with Lotka-Volterra cooperative source. (Chinese. English summary)Zbl 1399.35237 MSC:  35K57 35B35 92D25 ### Investigation of asymptotic stability of equilibria by localization of the invariant compact sets. (English. Russian original)Zbl 1370.93226 Autom. Remote Control 78, No. 6, 989-1005 (2017); translation from Avtom. Telemekh. 2017, No.
6, 36-56 (2017). MSC:  93D20 93C15 93C05 Full Text: Full Text: ### Behavior of limit cycle bifurcations for a class of quartic Kolmogorov models in a symmetrical vector field. (English)Zbl 1459.34084 MSC:  34C07 34C23 37G15 Full Text: ### Stability analysis on cotton field energy ecology system with pest stress. (Chinese. English summary)Zbl 1374.34201 MSC:  34C60 34D20 92D40 ### An explicit viscosity iterative algorithm for finding fixed points of two noncommutative nonexpansive mappings. (English)Zbl 1381.47061 MSC:  47J25 47H09 Full Text: Full Text: Full Text: Full Text: ### On the solution for a system of two rational difference equations. (English)Zbl 1339.39017 MSC:  39A20 39A22 39A30 Full Text: ### Order-preservation of solution correspondence for parametric generalized variational inequalities on Banach lattices. (English)Zbl 1477.47054 MSC:  47H10 47H07 Full Text: ### Global dynamics of a competitive system of rational difference equations. (English)Zbl 1346.39017 MSC:  39A20 39A22 39A30 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Mathematical modeling of the interaction of two cells in the proneural cluster of the wing imaginal disk of D. melanogaster. (Russian)Zbl 1349.92033 MSC:  92C15 92C37 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### The general iterative methods for equilibrium problems and fixed point problems of a countable family of nonexpansive mappings in Hilbert spaces. (English)Zbl 1364.47038 MSC:  47J25 47H09 Full Text: ### On the attraction of positive equilibrium point in Solow economic discrete model with Richards population growth. (English)Zbl 1306.91098 MSC:  91B62 92D25 Full Text: Full Text: Full Text: Full Text: ### Strong convergence theorem for a generalized equilibrium problem and system of variational inequalities problem and infinite family of strict pseudo-contractions. 
(English)Zbl 1390.47018 MSC:  47J25 47H09 47J20 Full Text: ### A general iterative method with strongly positive operators for equilibrium problems and fixed point problems in Hilbert spaces. (English)Zbl 1261.26003 MSC:  26A18 47H10 54C05 ### On the difference equation $$y_{n+1}=\frac {\alpha +y^p_n}{\beta y^p_{n-1}}-\frac {\gamma +y^p_{n-1}}{\beta y^p_n}$$. (English)Zbl 1289.39022 MSC:  39A20 39A30 39A23 Full Text: Full Text: ### New results on Hermitian matrix rank-one decomposition. (English)Zbl 1218.90195 MSC:  90C33 90C51 90C05 Full Text: ### Nonsmooth mechanics and convex optimization. (English)Zbl 1226.90005 Boca Raton, FL: CRC Press (ISBN 978-1-4200-9423-7/hbk; 978-1-4200-9424-4/ebook). xix, 425 p. (2011). Full Text: ### A new interior point method for linear complementarity problem. (English)Zbl 1263.90099 MSC:  90C33 90C51 Full Text: ### Modulus-based matrix splitting iteration methods for linear complementarity problems. (English)Zbl 1240.65181 MSC:  65K05 65F30 90C33 Full Text: ### Global asymptotic stability of a general higher order difference equation. (English)Zbl 1240.39036 MSC:  39A30 39A20 Full Text: Full Text: ### Hybrid pseudoviscosity approximation schemes for equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. (English)Zbl 1295.47075 MSC:  47J25 47H09 Full Text: Full Text: Full Text: ### On the rational recursive sequence $$x_{n+1}=\frac {ax_{n-1}}{b+cx_nx_{n-1}}$$. (English)Zbl 1212.39008 Reviewer: Eduardo Liz (Vigo) MSC:  39A20 39A22 Full Text: ### Univalent positive polynomial maps and the equilibrium state of chemical networks of reversible binding reactions. (English)Zbl 1173.92038 MSC:  92E20 37C25 15B48 74G25 74G30 65H10 92C45 Full Text: ### A new mapping for finding common solutions of equilibrium problems and fixed point problems of finite family of nonexpansive mappings. 
(English)Zbl 1167.47304 MSC:  47H10 47J25 Full Text: MSC:  34D23 MSC:  39A30 Full Text: ### Boundedness and global stability of a higher-order difference equation. (English)Zbl 1161.39011 MSC:  39A11 39A20 Full Text: ### Winching up heavy loads with a compliant arm: a new local joint controller. (English)Zbl 1172.93381 MSC:  93C85 68T40 92C10 Full Text: ### Topological methods for set-valued nonlinear analysis. (English)Zbl 1141.47033 Hackensack, NJ: World Scientific (ISBN 978-981-270-467-2/hbk). xiv, 612 p. (2008). ### The existence and uniqueness of limit cycle in a predator-prey system with functional response $$x^\frac{1}{n}$$. (Chinese. English summary)Zbl 1150.34470 MSC:  34C60 34C05 92D25 ### Dynamics of a class of higher order difference equations. (English)Zbl 1136.39007 Ruffing, A. (ed.) et al., Communications of the Laufen colloquium on science, Laufen, Austria, April 1–5, 2007. Aachen: Shaker (ISBN 978-3-8322-6739-1/pbk). Berichte aus der Mathematik, 16. 1-18 (2007). MSC:  39A11 39A20 ### Existence of nontrivial solutions of a rational difference equation. (English)Zbl 1131.39009 MSC:  39A11 39A20 Full Text: ### The rule of trajectory structure and global asymptotic stability for a nonlinear difference equation. (English)Zbl 1176.39016 MSC:  39A30 39A23 39A20 Full Text: ### On positive solutions of the difference equation $$x_{n+1}=\frac{x_{n-5}}{1+x_{n-2}x_{n-5}}$$. (English)Zbl 1157.39303 MSC:  39A11 39A20 Full Text: ### Positive equilibrium points of positive discrete-time systems. (English)Zbl 1132.93025 Commault, Christian (ed.) et al., Positive systems. Proceedings of the second multidisciplinary international symposium on positive systems: Theory and applications (POSTA 06), Grenoble, France, August 30 – September 1, 2006. Berlin: Springer (ISBN 3-540-34771-2/pbk). Lecture Notes in Control and Information Sciences 341, 81-88 (2006). MSC:  93C55 15B48 93B60 ### Global solution to a prey-predator model with cross-diffusion. (Chinese. 
English summary)Zbl 1116.35325 MSC:  35K50 92D25 35K55 35B40 ### Uniform boundedness and stability of solutions to the three-species Lotka-Volterra competition model with self and cross-diffusion. (Chinese. English summary)Zbl 1103.35036 MSC:  35K50 35K57 35B35 92D25 ### A note on global asymptotic stability of a family of rational equations. (English)Zbl 1083.39003 MSC:  39A11 39A20 ### On the dynamics of $$x_{n+1}=(bx_{n-1}^2)(A+Bx_{n-2})^{-1}$$. (English)Zbl 1085.39007 MSC:  39A11 39A20 ### Equilibria of pairs of nonlinear maps associated with cones. (English)Zbl 1061.47054 MSC:  47J10 15B48 47H07 Full Text: ### A mathematical model of competition for two essential resources in the unstirred chemostat. (English)Zbl 1077.35057 MSC:  35K50 35K55 35J65 35K57 Full Text: ### On the positive solutions of the difference equation $$x_{n+1}=\frac {ax_{n-1}} {1+bx_{n}x_{n-1}}$$. (English)Zbl 1063.39003 MSC:  39A11 39A20 Full Text: ### On the positive solutions of the difference equation $$x_{n+1}=(x_{n-1})/(1+x_{n} x_{n-1})$$. (English)Zbl 1050.39005 MSC:  39A11 39A20 Full Text: Full Text: ### Global attractivity in a delay difference equation. (English)Zbl 1047.39015 Reviewer: Fozi Dannan (Doha) MSC:  39A12 39A20 37B25 ### Hopf bifurcation in a chemostat-related model. (English)Zbl 1018.34043 Sunada, Toshikazu (ed.) et al., Proceedings of the third Asian mathematical conference 2000, University of the Philippines, Diliman, Philippines, October 23-27, 2000. Singapore: World Scientific. 12-16 (2002). MSC:  34C23 92E20 37G10 ### Global asymptotic stability for a nonlinear delay difference equation. (English)Zbl 1013.39003 Reviewer: Pavel Rehak (Brno) MSC:  39A11 Full Text: ### Global asymptotic stability of inhomogeneous iterates. (English)Zbl 0995.47027 MSC:  47H07 47H09 47H10 Full Text: ### On the trichotomy character of $$x_{n+1}=(\alpha+\beta x_n+\gamma x_{n-1})/(A+x_n)$$. 
(English)Zbl 1005.39017 MSC:  39A11 39B05 Full Text: ### Fixed points in ordered Banach spaces and applications to elliptic boundary-value problems. (English)Zbl 0994.47050 Giannessi, Franco (ed.) et al., Equilibrium problems: nonsmooth optimization and variational inequality models. Dordrecht: Kluwer Academic Publishers. Nonconvex Optim. Appl. 58, 25-31 (2001). MSC:  47H07 47H10 35J60 ### On vector equilibrium and vector variational inequality problems. (English)Zbl 0977.49004 Hadjisavvas, Nicolas (ed.) et al., Generalized convexity and generalized monotonicity. Proceedings of the 6th international symposium, Samos, Greece, September 1999. Berlin: Springer. Lect. Notes Econ. Math. Syst. 502, 247-263 (2001). ### Stability of the recursive sequence $$x_{n+1}=(\alpha-\beta x_n)/(\gamma+x_{n-1})$$. (English)Zbl 0990.39009 MSC:  39A11 39B05 Full Text: ### The Poincaré compactification of the MIC-Kepler problem with positive energies. (English)Zbl 0988.70005 MSC:  70F05 70H33 Full Text: ### Non-standard discretization methods for some biological models. (English)Zbl 0989.65143 Mickens, Ronald E. (ed.), Applications of nonstandard finite difference schemes. Papers from the minisymposium on nonstandard finite difference schemes: theory and applications, SIAM annual meeting, Atlanta, GA, USA, 1999. Singapore: World Scientific. 155-180 (2000). ### Kinetic energy and Lyapunov stability of equilibria of natural Lagrangian systems. (English)Zbl 0979.70015 Fiedler, B. (ed.) et al., International conference on differential equations. Proceedings of the conference, Equadiff ’99, Berlin, Germany, August 1-7, 1999. Vol. 2. Singapore: World Scientific. 1155-1157 (2000). MSC:  70H03 70H14 ### On the use of non-canonical quantum statistics. (English)Zbl 0956.82002 MSC:  82B10 82B03 94A17 Full Text:
http://mathhelpforum.com/geometry/30935-measurement-quater-circle.html
1. Measurement of a quarter circle

Hello all, I'm a n00b, go easy on me. First, I hope I have this in the correct forum; please move it if not. I am designing a lightbox so I can take some pretty cool photographs, and I want to design it properly first. General maths is OK, but when it comes to calculating the length of the circumference of circles I am stumped. See my diagram: my lightbox is to have a quarter-circle curve of 10 cm radius. What I need to know is how long the arc of that quarter circle is. I can then do the rest of the calculations to determine the back wall and the floor. Many many thanks. ILMV aka Ben

2. The circumference of a circle is not a complicated formula; it's easy to learn. It's $2{\pi}r$. In your case, r = 10, so you have a circumference of $20{\pi}\approx 62.83$. Now a quarter circle is 1/4th of this: $\frac{20{\pi}}{4}=5{\pi}\approx 15.71$

3. Many thanks, just what I needed. I will bookmark this for future reference.
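The arithmetic in the answer can be checked in a couple of lines of Python (my own sketch; the thread itself contains no code):

```python
import math

def quarter_arc_length(radius):
    # a full circle has circumference 2*pi*r; a quarter circle has one fourth of it
    return 2 * math.pi * radius / 4

print(round(2 * math.pi * 10, 2))        # full circumference for r = 10 cm: 62.83
print(round(quarter_arc_length(10), 2))  # quarter-circle arc: 15.71
```

The same function works for any radius, which is handy when tweaking the lightbox dimensions.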
https://www.itl.nist.gov/div898/handbook/pmd/section2/pmd215.htm
4.2.1. What are the typical underlying assumptions in process modeling?

## The data are randomly sampled from the process.

**Data Must Reflect the Process.** Since the random variation inherent in the process is critical to obtaining satisfactory results from most modeling methods, it is important that the data reflect that random variation in a representative way. Because of the nearly infinite number of ways non-representative sampling might be done, however, few, if any, statistical methods would ever be able to correct for the effects it would have on the data. Instead, these methods rely on the assumption that the data will be representative of the process. This means that if the variation in the data is not representative of the process, the nature of the deterministic part of the model, described by the function $$f(\vec{x};\vec{\beta})$$, will be incorrect. This, in turn, is likely to lead to incorrect conclusions being drawn when the model is used to answer scientific or engineering questions about the process.

**Data Best Reflects the Process Via Unbiased Sampling.** Given that we can never determine what the actual random errors in a particular data set are, representative samples of data are best obtained by randomly sampling data from the process. In a simple random sample, every response from the population(s) being sampled has an equal chance of being observed. As a result, while it cannot guarantee that each sample will be representative of the process, random sampling does ensure that the act of data collection does not leave behind any biases in the data, on average. This means that most of the time, over repeated samples, the data will be representative of the process. In addition, under random sampling, probability theory can be used to quantify how often particular modeling procedures will be affected by relatively extreme variations in the data, allowing us to control the error rates experienced when answering questions about the process.

**This Assumption Is Relatively Controllable.** Obtaining data is of course something that is actually done by the analyst rather than being a feature of the process itself. This gives the analyst some ability to ensure that this assumption will be valid. Paying careful attention to data collection procedures and employing experimental design principles like randomization of the run order will yield a sample of data that is as close as possible to being perfectly randomly sampled from the process. Section 4.3.3 has additional discussion of some of the principles of good experimental design.
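To make the idea of simple random sampling concrete, here is a small Python sketch (my own illustration, not part of the handbook) showing that, because every element has an equal chance of being drawn, the average of many sample means tracks the population mean:

```python
import random

random.seed(42)  # for reproducibility of this sketch
population = [float(i) for i in range(1000)]  # hypothetical process measurements
pop_mean = sum(population) / len(population)  # 499.5

# draw many simple random samples and record each sample mean
sample_means = []
for _ in range(2000):
    sample = random.sample(population, 20)  # every element equally likely, no replacement
    sample_means.append(sum(sample) / len(sample))

avg = sum(sample_means) / len(sample_means)
print(pop_mean, avg)  # the average of the sample means is close to the population mean
```

Any single sample may be off, but on average the sampling procedure introduces no bias, which is exactly the property the handbook describes.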
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/2010/docId/2183
## On the Large Time Behavior of Diffusions: Results Between Analysis and Probability

Limit theorems constitute a classical and important field in probability theory. In several applications, in particular in demographic or medical contexts, killed Markov processes suggest themselves as models for populations undergoing culling by mortality or other processes. In these situations mathematical research features a general interest in the observable distribution of survivors, which is known as a Yaglom limit or quasi-stationary distribution. Previous work often focuses on discrete state spaces, commonly birth-death processes (or with some more flexible localization of the transitions), with killing only on the boundary. The central concerns of this thesis are to describe, for a given class of one-dimensional diffusion processes, the quasi-stationary distributions (if any), and to describe the convergence (or not) of the process conditioned on survival to one of these quasi-stationary distributions. Rather general diffusion processes on the half-line are considered, where 0 is allowed to be a regular or an exit boundary. Very similar techniques are applied in this work in order to derive results on the large time behavior of an exotic measure-valued process, which is closely related to so-called point interactions, which have been widely studied in the mathematical physics literature.

German title: Zum Langzeit-Verhalten von Diffusionsprozessen: Resultate zwischen Analysis und Wahrscheinlichkeitstheorie (On the long-time behavior of diffusion processes: results between analysis and probability theory)

Author: Martin Kolb
URN: urn:nbn:de:hbz:386-kluedo-24821
Advisor: Heinrich von Weizsäcker
Document type: Dissertation (English, 2009)
Accepted: 29.10.2009; published: 30.03.2010
Institution: Technische Universität Kaiserslautern, Fachbereich Mathematik
Keywords: diffusion processes; Yaglom limits; limit theorems
https://www.gnurou.org/code/kdevelop-kernel/
# Hacking the Linux Kernel with KDevelop

OBSOLETE CONTENT WARNING: This article (and the mentioned plugin) was written about the KDevelop 4 series. KDevelop 5 has been released since, featuring a CLang-based parser that solves most of the mentioned issues. The plugin mentioned in this article will not work with it. I may write an update to both article and plugin in the future, but for now I manage to work with the Generic project manager, so you may do just as well!

KDevelop is really an awesome IDE. I've been doing a lot of IDE-jumping, being from the old school of Emacs and Vi, but remaining dissatisfied with the code-analysis abilities of these respectable ancestors and the lack of integration of code-browsing tools. Sure, Emacs has ECB, but when I finally managed to configure the beast to work correctly on a reasonably sized project, just to realize it is way too slow to be usable, I dropped the whole Emacs thing in despair. There are of course CTags and CScope, which work rather well, but you have to manually update your databases, and once again you hit a lack of integration and visual aid. What, Eclipse has been able to do flawless code referencing and error detection in Java for 10 years, and we do not have the same capability in a good C/C++ editor?

Wait, we do: there are Qt Creator and Eclipse's CDT, so I shall dismiss them first before going on. Qt Creator is actually a pretty good IDE. It's fast, relatively lightweight, and has good code-browsing capabilities and even Vi bindings (that's the one thing I don't want to lose from the old days: no Vi bindings, no code). I'd dare say it's close to perfection if you work with Qt only. Unfortunately for me, Qt is only part of my coding life. I have not been able to get it to work satisfactorily for the kernel: too Qt-centric. For Qt, awesome (although KDevelop is at least as good IMHO). But for the kernel? Forget it. CDT tries to bring the same level of integration for C/C++ to Eclipse as the JDT does for Java.
Well, this could be quite good too, and some people actually seem to use it to work on the kernel, but unfortunately, after trying the same procedure and seeing that I had been using 1.5 GB of memory just to see red underlining all over the code, I was quite disappointed. Not to mention that using Eclipse on my quad-core, 8 GB dev machine still feels like driving a truck.

Meet KDevelop. Wait, hasn't this thing been in development for something like 10 years too? If you tried it in the KDE2/3 days, you may have been disappointed; I surely was. Well, it may have taken time to get there, but let me tell you that as of today KDevelop is seriously underrated. It's by no means perfect (yet), but it's definitely the IDE that comes closest to my wishes:

• Of course, it handles KDE and Qt code gracefully, as well as regular C/C++ projects. All my user-space needs covered.
• It has Vi key bindings! No one considers doing serious coding without basic Vi bindings, do they?
• Pretty good Git integration, including a visual 'git blame' that I love to use.
• The project file format is simple, and the files can easily be edited to tweak your settings without depending on the GUI and its limitations.
• It does a rather good job at handling kernel code. There are still limitations (see below), but hopefully we will address them soon.

What is really nice about KDevelop with the kernel is that despite the huge quantity of code, navigation is fast and smooth (despite the memory usage, which remains quite high; a good excuse to buy those 8 GB of RAM). The Quick Open bar lets you instantly jump to a file, structure definition or function. The parser is not dumb like most of those I have seen (hi, CTags!) and does not rely on simple text matching to find where a variable is used; instead it really parses the code and knows what it means. Leaving the mouse on a label gives you all the information you could possibly want about it, including the places where it is accessed.
This is invaluable when exploring new code, and KDevelop does the best job at it. There is also huge potential for code analysis, and I am quite confident we will also see things like function-pointer resolution later (as the kernel is full of them, that would be a killer feature). So, no more waiting: the next section will explain how to set up KDevelop to hack your kernel, and then we will review the limitations and shortcomings, which will hopefully be addressed in future versions.

## Setting up a kernel project

We assume that you have already git cloned the kernel source. Start KDevelop (version 4.2 or later). The first thing to do is to go to Settings -> Configure KDevelop, then Background Parser, and disable it. We will re-enable it later, but you don't want it to go through Linux's 15 million lines of code at this point. Create the project using Project -> Open/Import Project. In the file selection dialog, go to the root of the Linux source and do not select any file. Click Next. Put in the name you like, and select the Generic Project Manager. Click Finish. After a few seconds you should see your project appear in the Projects panel. You can already open source files, but with the parser disabled it will not be much more useful than your average text editor. The next thing we want is to re-enable the code parser so that we can navigate through the code. But since the Linux code base is so big, we also want to limit it to the part that is useful for us. Also, we need to provide it with the right include paths. Right-click the project in the Projects pane and select Open Configuration. There you can provide patterns for files to include in and exclude from the project. Remove the * entry in the includes, and add the patterns corresponding to the parts of the kernel you are interested in. Most likely, you will at least want /kernel/*, /lib/* and /include/* since they cover the basics.
Having /Documentation/* is also a good idea since it will allow you to look up documentation files through the Quick Open combo. What you want to be more selective about are the arch/ and drivers/ directories. Only include the arch/ subdirectory that corresponds to the architecture you plan to compile the kernel for. All the same, since driver code is so huge, you want to include only the subdirectories of drivers/ that you are interested in. The following screenshot gives you a good starting set of directories for working on ARM. After clicking Ok, re-enable the background parser. It will be working for a while, but it should still be bearable (don't forget to set the number of jobs to the number of cores in your machine). You can now open a C file; however, you will notice that most includes are not resolved and are underlined in red. This is because KDevelop by default looks for include files in /usr/include, which is not really suitable for kernel work. To fix this, you need to give it a list of include directories. Leave your mouse cursor on one of these unresolved includes, click the Solve button, then Add custom include path. The following dialog can be a little confusing, so let's first explain how KDevelop handles custom include paths. Starting from the directory containing the file, and going up to the root directory, KDevelop searches for files named .kdev_include_paths that contain lists of include directories. The format is very simple: one line, one directory, given by its absolute path. This dialog just lets you create these files. The storage directory is where the .kdev_include_paths file will be created. Since we want it to apply to the whole kernel, specify the root of your kernel source.
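The file format described above is simple enough to write by hand. As an illustration (the paths below are hypothetical and assume an ARM/OMAP2 kernel tree checked out under /home/user/linux), a .kdev_include_paths file at the root of the source tree could contain:

```
/home/user/linux/include
/home/user/linux/arch/arm/include
/home/user/linux/arch/arm/mach-omap2/include
```

One absolute path per line, nothing else; KDevelop picks the file up for every source file below the directory that contains it.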
Skip the Automatic resolution and just add absolute paths to include/, arch/yourarch/include, and arch/yourarch/mach-yourmach/include, as shown in the following screen. Alternatively, you can also add these paths manually into a file named .kdev_include_paths; this may actually be simpler. Hit Ok (or hit F5 to reload your open file if you edited it manually) and watch the background parser resolve your include files correctly. You can now use all the nice coding and code-browsing features of KDevelop on the kernel. Happy hacking!

## Shortcomings

There is a short list of annoying things at the moment. I hope to fix some (or better, all) of them in the future.

### Frequent need to run “make mrproper”

This is an interesting behavior, and I don't really know how to fix it. Once in a while, when you try to compile the kernel outside of the source tree, you will be asked to run “make mrproper”. The reason is that KDevelop seems to create an include/config directory inside the kernel sources. Running the command indeed cleans that directory (manually deleting it also works) and allows you to continue. The question is, why does this directory get created in the first place? There is one inside the build directory, but for some reason KDevelop seems to want to replicate it into the sources. Weird.

### Kernel configuration include file not parsed

Once you have configured your kernel, a C include file is created in include/generated/autoconf.h that contains the zillions of macro definitions that allow you to enable/disable features. These macros are extremely important for good code parsing. The file is automatically included at the beginning of every C file by passing the -include option to gcc, so it is not explicitly included from the code. Because of this, KDevelop cannot see these macro definitions and acts as if they did not exist, which is sometimes misleading.
For instance, it will ignore all runtime PM code, since that code is only processed if the corresponding macro is defined. Fixing this should only require minor twiddling of the C parser, and an option to automatically process some header files, similar to the include directories feature. It could even be saved in the same file to limit the automatic include to a subset of the project. Definitely the next item on my to-hack list.

### __KERNEL__ not defined

All the same, many include files are shared between kernel and user space, and the kernel-only parts are processed only if the __KERNEL__ macro is defined. This macro is defined via the -D option of GCC, so once again KDevelop misses it. However, it should not be very difficult to add a macro definitions panel to the project configuration, like almost every IDE possesses.

### C99 Initializers

This is one of the rare missing things in the otherwise great KDevelop C parser. The kernel is full of C99 designated initializations that look like this:

```c
static struct zorro_driver cirrusfb_zorro_driver = {
    .name     = "cirrusfb",
    .id_table = cirrusfb_zorro_table,
    .probe    = cirrusfb_zorro_register,
    .remove   = __devexit_p(cirrusfb_zorro_unregister),
};
```

KDevelop raises a parse error when it meets these, and of course the members are not available for code browsing and referencing. Here the parser needs to be updated to support this form of initializer.

## Conclusion

If you don't mind the initial project configuration and running into a few rough edges once in a while, kernel hacking with KDevelop is definitely doable, and I'd even dare say enjoyable. As far as I'm concerned, it is what gives me the most comfort. For beginners, the code browsing, online macro expansion, online documentation and Quick Open are invaluable features that largely outweigh the few gaps.
Hopefully I will find time to address them, but the best thing to do would be to have a kernel project type within KDevelop that could also be used to configure and compile the kernel and would take care of all this transparently.
# Solution of a differential equation $xdy - ydx = 0$ represents

A. A rectangular hyperbola
B. A parabola whose vertex is at the origin
C. A straight line passing through the origin
D. A circle whose center is at the origin

Last updated date: 22nd Mar 2023

Verified
Hint: We are given a differential equation and must identify the curve its solution represents. Separate the variables so that each appears with its own differential, then integrate both sides; the resulting equation identifies the curve directly.

Given the differential equation
$xdy - ydx = 0$
Separating the variables,
$xdy = ydx$
$\dfrac{{dy}}{y} = \dfrac{{dx}}{x}$
Integrating both sides we get,
$\int {\dfrac{{dy}}{y}} = \int {\dfrac{{dx}}{x}}$
Taking the integrals,
$\log y = \log x + \log C$
Removing the log on both sides,
$\therefore y = Cx$
This is the equation of a straight line passing through the origin. Thus option C is the correct answer.

Note: Besides working through the integration above, it helps to know the standard equations of the other options, so it becomes easy to pick the correct answer. Note that each of these curves also has its vertex or center at the origin:
- Rectangular hyperbola: $xy = {C^2}$
- Parabola with vertex at origin: ${y^2} = 4ax$
- Circle with center at origin: ${x^2} + {y^2} = {r^2}$
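As a quick numerical sanity check (my own sketch, not part of the original solution), the family $y = Cx$ does satisfy $x \, dy/dx - y = 0$:

```python
# Check numerically that y = C*x satisfies x * dy/dx - y = 0
C = 3.0  # any constant works

def y(x):
    return C * x

h = 1e-6  # step for a central finite difference
for x in (0.5, 1.0, 2.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    print(abs(x * dydx - y(x)) < 1e-6)  # prints True for each x
```

The same check fails for the other three candidate curves, which is another way to rule them out.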
# Create player rankings?

I recently created a tournament system that will soon lead into player rankings. Basically, after players are done with the tournament, they are given a rank based on how they did in the tournament. So the person who won the tournament will have the most points and be ranked #1, the second-place finisher will have the second most points and be ranked #2, and so on.

However, after they are placed in the new rankings, they can challenge other members and change their rank by playing matches against them. So basically (using a ranking system), if Player A who is ranked #2 beats Player B who is ranked #1, Player A will now become #1. I've also decided that if a player wants to compete in the rankings but was not present during the tournament, they can sign up after the tournament and will be given the lowest possible rank with the lowest score (but they have a chance to move up).

So now, I want to know how I should go about planning this. When I convert the players from tournament rankings to match rankings, I have to assign them points. I decided I can do this in one of two ways. The first is a linear spread:

Rank 1: 1000
Rank 2: 900
Rank 3: 800
Rank 4: 700
Rank 5: 600
Rank 6: 500
Rank 7: 400
Rank 8: 300
Rank 9: 200
Rank 10: 100

Or I can set it up in an exponential type of system where the point gaps are greater between the players that are ranked higher.

After looking on the internet I've decided it would be wise to use Elo to give players their new ranks after they have played matches against each other. I went about it using this page: http://www.lifewithalacrity.com/2006/01/ranking_systems.html

So if I go about this my first way, let's say I have rank #10 facing rank #1. My formula is R' = R + K * (S - E), and the rating of #10 is only 100 points whereas #1 has 1,000. So after doing the math, rank #10's expected chance of beating #1 is:

1 / [ 1 + 10 ^ ( [1000 - 100] / 400 ) ] = 0.55%

So 100 + 32 * (1 - 0.52) = 115.36

The problem I have with Elo is it makes no sense.
After a rank such as #10 beats #1, he shouldn't gain something as low as 15 points. I'm not sure if I'm doing the math wrong, or if I'm splitting up the points wrong. Or maybe I shouldn't use Elo at all?

• If a very low-ranked person beats a high-ranked person, I think Elo assumes it was a fluke. This whole issue of ranking systems that only use win/loss as an input is complex. I recommend reading this: en.wikipedia.org/wiki/Elo_rating_system – Almo Nov 21 '14 at 20:59

You can also tweak the variables in the Elo formula to make the outcome of a single match have more or less impact on a player's score. "K" in that formula is the most a player's score can change in a single match. That is why you are seeing such a small change in score: the most the score could possibly change in your formula is 32. Also, you are doing the math wrong; the #10 ranked player should be:

100 + 32 * (1 - 0.0055) = 131.824

They should get almost all of the 32 points up for grabs in that match, because the expectation that they would win was so low.
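To make the arithmetic concrete, here is a minimal Python sketch of the Elo formulas quoted above (the function names are mine, not from any library):

```python
def expected_score(r_a, r_b):
    """Elo expected score: probability that player A beats player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def update(rating, expected, actual, k=32):
    """R' = R + K * (S - E); K caps the per-match rating change."""
    return rating + k * (actual - expected)

e = expected_score(100, 1000)        # the underdog's win probability
print(round(e, 4))                   # -> 0.0056
print(round(update(100, e, 1.0), 2)) # -> 131.82 after the upset win
```

Note that the expected score is 0.0056 (about 0.56%), not 0.52; plugging in the correct value reproduces the answer's 131.82 rather than 115.36.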
# 60cm Triangle Circle Rectangle Aluminum Road Signs

60cm Triangle Circle Rectangle Aluminum Road Signs With Clamps Reflective Traffic Signs. Find Complete Details about 60cm Triangle Circle Rectangle Aluminum Road Signs With Clamps Reflective Traffic Signs, Aluminum Road Signs, Triangle Traffic Signs, Reflective Material Road Sign from Traffic Signs Supplier or Manufacturer Hangzhou Eaglerd Traffic Industry And Trade Co., Ltd. 60cm Triangle circle rectangle Aluminum Road signs Metal traffic sign board warning road sign. US $5.00-$30.00 / Piece, 300 Pieces (Min. Order). Hangzhou Eaglerd Traffic Industry And Trade Co., Ltd. Contact Supplier ...

K&S Precision Metals 83062 Round Aluminum Tube, 5/16" OD x 0.049" Wall Thickness x 12" Length, 0.3125 in OD, 1 pc, Made in USA.

China Triangle Aluminum Warning Road Signs, Find details about China Metal Traffic Sign, Warning Road Sign from Triangle Aluminum Warning Road Signs - …

If a triangle has three unequal sides, it is called a scalene triangle. In this triangle all three sides are of unequal lengths: $$XY ≠ XZ ≠ YZ$$

Classification of triangles by angles: right-angled triangle. If any angle of a triangle is $$90^\circ,$$ the triangle is called a right-angled triangle or a right triangle…

Aluminum Rectangle Tube is an extruded aluminum tube that is widely used for all types of fabrication projects where light weight and corrosion resistance are primary concerns - frame work, support columns, gates, fencing, handrails, etc. Rectangle Aluminum Tube has square corners inside and outside, with no weld seam. Available in both 6061-T6 and 6063-T52, with the 6063 square tube being more ...

Graniterock's series of on-line Construction Calculators will help you estimate the amount of material that you will need for your construction jobs.
The calculators are grouped into these categories: Unit Conversion, Concrete Materials, Aggregate, Road Materials, Masonry Estimator, Landscape Supplies, and … The diameter of given circle = 56 cm. Circumference of circle = 2πr = ( 2 x 22/7 x 28 ) cm = 176 cm. Hence, the circumference of the given circle is 176 cm . Perimeter And Area Questions For Class 6 – Radius of a Circle. Question 14. Find the radius of the circle whose circumference is 132 cm? Explanation. Let, the radius of the given circle ... This free area calculator determines the area of a number of common shapes using both metric units and US customary units of length, including rectangle, triangle, trapezoid, circle, sector, ellipse, and parallelogram. Also, explore the surface area or volume calculators, as well as hundreds of other math, finance, fitness, and health calculators. MessageType Traffic Control Signs Message or Graphic Message Only Legend Wrong Way Graphic Type None Reflectivity Engineer Grade Material Aluminum Thickness (Decimal Inch) 0.0800 Coating Sign Muscle Mounting Post Number of Printed Sides 1 Color White on Red Height (Decimal Inch) 18 Width (Decimal Inch) 24 Shape Rectangle Special Features UV Resistant; Graffiti Proof; Chemical … A circle of radius 2 cm is cut out from a square piece of an aluminum sheet of side 6 cm. What is the area of the left over aluminum sheet? (Take π = 3.14) Answer: Radius of circle = 2 cm and side of aluminum square sheet = 6 cm. According to question, Area of aluminum sheet left = Total area of aluminum sheet – Area of circle = side * side ... Rectangle Bars are a long and rectangular-shaped metal bars are used in a wide range of structural and architectural applications. Flat bar is available in Aluminum, Stainless Steel, Hot Rolled and more. Hot Rolled Flat Bar can use in situations where precise shapes and tolerances are not required. 
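The shape formulas used in the examples above are short enough to sketch directly; here is an illustrative Python version (the helper names are my own, not from any particular calculator):

```python
import math

# Minimal helpers for the common shapes discussed above
def rectangle_area(l, w):
    return l * w

def triangle_area(b, h):
    return 0.5 * b * h

def circle_area(r):
    return math.pi * r * r

def circumference(r):
    return 2 * math.pi * r

print(round(2 * (22 / 7) * 28))     # -> 176, the worked example with pi = 22/7
print(round(circumference(28), 1))  # -> 175.9 with the exact pi
print(round(3.14 * 2 ** 2, 2))      # -> 12.56, the cut-out circle with pi = 3.14
```

The small differences between 176 and 175.9 come only from the choice of approximation for pi (22/7 versus math.pi).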
MessageType Pedestrian Crossing Signs Message or Graphic Message Only Legend Caution - Slow Down - Pedestrian Traffic Graphic Type None Reflectivity NonReflective Material Aluminum Thickness (Decimal Inch) 0.0400 Coating No Coating Mounting Post Number of Printed Sides 1 Color Black on Yellow Height (Decimal Inch) 10 Width (Decimal Inch) 14 Shape Rectangle PSC Code 9905 Polished and ready to use, GoodyBeads brings you Metal Blanks at an affordable price. Choose from a large variety of sizes, shapes and metal styles for your jewelry design. You can also often get one of these from another; for example, if you know the formula for the area of a circle, you may be able to figure out that the volume of a cylinder is just the area of the associated circle(s) at the end times the cylinder's height. ... rectangle, or triangle. Add the measurements to get the value of the perimeter (P ... We also export aluminum plate,aluminum signs. aluminum plate with different models, different thickness, different shapes (circle, triangle, rectangle) ,can be made according to customer requirements, which mainly used in industrial manufacturing. our aluminum signs which are mainly used for road indication, with strong technical force ... Jun 11, 2018 · A circle is inscribed in an equilateral triangle ABC is side 12 cm, touching its sides (see figure). Find the radius of the inscribed circle and the area of the shaded part. Solution: Each side of the equilateral triangle ABC (a) = 12 cm. Question 25. In the figure, an equilateral triangle ABC of side 6 cm has been inscribed in a circle. Rd_sharma_(2018) Solutions for Class 10 Math Chapter 13 Areas Related To Circles are provided here with simple step-by-step explanations. These solutions for Areas Related To Circles are extremely popular among Class 10 students for Math Areas Related To Circles Solutions come handy for quickly completing your homework and preparing for exams. 
If the border is 5 cm thick, the area of the screen is Solution: The length of the screen is 80 - (5 + 5) = 80 - 10 = 70 cm, the width of the screen is 50 - (5 + 5) = 50 - 10 = 40 cm The cylinder surface area is the height times the perimeter of the circle base, plus the areas of the two bases, all added together. Surface area of a sphere The surface area formula for a sphere is 4 x π x (diameter / 2) 2 , where (diameter / 2) is the radius of the sphere (d = … Apr 22, 2010 · A PowerPoint presentation of Perimeter & Area for 4th and 5th grade. To be noted, the base and height of the triangle are perpendicular to each other. The unit of area is measured in square units (m 2, cm 2). Example: What is the area of a triangle with base b = 3 cm and height h = 4 cm? Using the formula, Area of a Triangle, A = 1/2 × b × h = 1/2 × 4 cm × 3 cm = 2 cm × 3 cm = 6 cm 2 The length and width of a rectangle are 11.5 cm and 8.8 cm respectively. Find its area and perimeter. If the height of a triangle is 19cm and its base length is 12cm. Find the area. The perimeter of an equilateral triangle is 21cm. Find its area. A parallelogram has base length of 16cm and the distance between the base and its opposite side is ... The area of this orange area outside of the triangle and inside of the circle. Well, the area of our circle is 4 pi. And from that we subtract the area of the triangle, 3 square roots of 3. And we are done. This is our answer. This is the area of this orange region right there. Anyway, hopefully you found that fun. Well, this is a rectangle. So we know if this length is 10, then this length must also be 10. And if this width is 6, then this width must be 6 as well. And now we can figure out the perimeter. It's 10 plus 10 plus 6 plus 6, which is 32. So let me write that down. The perimeter of our original yellow rectangle is … Jul 30, 2018 · 6. In Fig. 15.79, OCDE is a rectangle inscribed in a quadrant of a circle of radius 10 cm. 
If OE = 2√5, find the area of the rectangle. Solution Isosceles Triangle: A triangle with two sides of equal length is an isosceles triangle. Scalene Triangle: A scalene triangle is the one with all unequal sides. On the basis of the measure of angles, triangles are of following types: Acute-angled Triangle: A triangle in which each angle is acute (less than 90°) is an acute-angled triangle. We are leading manufacturer for all types Road Sign boards as per standard specification of IRC 67-2001. Avaiable in 60cm & 90 cm Triangle and circle using … There are several different shape options to choose from for your aluminum sign, including Square/Rectangle, Rounded Corners (¼” or 1” radius), Circle/Oval, Custom or Custom with Border. The Custom shape option (previously known, or sometimes referred to, as Contour Cut) allows you to either upload a custom shape to our design tool or have ... We have eight free printable black & white and colored shape sets, including basic geometric shapes and fun shapes, that are great to use for crafts and various early math and shapes-themed learning activities. Our aluminum sign blanks are chemically treated to meet ASTM B-449 specifications as a pre-treatment for paint or reflective sheeting. Sign blanks may be ordered in a variety of sizes. Aluminum signs are available in .063, .080, and .125 gauge aluminum*. We carry .080 gauge aluminum in stock at all times and made readily available to order on our online store. A circle has a circumference of $17.27$ meters. Find the diameter. 5.5 m. A circle has a circumference of $80.07$ centimeters. Find the diameter. In the following exercises, find the radius of the circle with given circumference. A circle has a circumference of $150.72$ feet. 24 ft May 26, 2020 · To do this problem it’s easiest to assume that the circle (and hence the rectangle) is centered at the origin of a standard $$xy$$ axis system. 
Doing this we know that the equation of the circle will be ${x^2} + {y^2} = 16$ and that the right upper corner of the rectangle will have the coordinates $$\left( {x,y} \right)$$. Metal Weight Calculator. Choose the alloy type, shape, and number of pieces and enter the dimensions to calculate the total weight. Nationwide Warehouse and Online Order Pickup Locations. 1-2 day ground shipping to 99% of U.S; Metal & Plastic Materials Shop Online. No Minimums. your solution is that the length and width of the first rectangle is 12 cm for the length and 4 cm for the width. breadth and width mean the same thing in this problem. Question 1166545 : A man mows his 25 foot by 200 foot rectangular lawn in a spiral pattern starting from the outside edge. Area and Volume Formula for geometrical figures - square, rectangle, triangle, polygon, circle, ellipse, trapezoid, cube, sphere, cylinder and cone. Perimeter using Grids. Introduce the concept of finding the perimeter using grids with this unit of worksheets. Find the perimeter of the shapes on the grids with fixed and varying scales, draw shapes on grids using the given perimeter, compare the perimeter of shapes on grids and match them as well. Aluminum. Aluminum Bar. 6061 Square; 6061 Rectangle; 6061 Hex; 6061 Round; 6063 Rectangle; Aluminum Sheet & Plate. 3003, 5052, 6061 & Cast Tooling; Common Alloy Perforated; Common Alloy Expanded; 3105 Painted; 3003 & 6061 Diamond; Aluminum Tube. 6061 and 6063 Square; 6063 and 6061 Rectangular; 6061 and 6063 Round; Aluminum Pipe. 6061 Aluminum ... You now have a right triangle and a rectangle and can finish the problem with the Pythagorean Theorem and the simple fact that opposite sides of a rectangle are congruent. The triangle’s hypotenuse is made up of the radius of circle A, the segment between the circles, and the radius of circle Z. Their lengths add up to 4 + 8 + 14 = 26. 
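For the inscribed-rectangle calculus problem above (circle ${x^2} + {y^2} = 16$, corner at $(x, y)$), a quick numerical sketch (my own, not from the original solution) confirms the maximum area without doing the calculus:

```python
# Grid search over the corner x-coordinate for the largest rectangle
# inscribed in x^2 + y^2 = 16 (full area = 4*x*y with y = sqrt(16 - x^2))
def area(x):
    return 4 * x * (16 - x * x) ** 0.5

best_area, best_x = max((area(i * 4 / 10000), i * 4 / 10000)
                        for i in range(1, 10000))
print(round(best_area, 2), round(best_x, 2))  # -> 32.0 2.83
```

The maximum area is $2r^2 = 32$, attained when the inscribed rectangle is a square with $x = y = 2\sqrt{2} \approx 2.83$.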
Serving the community for over 35 years, Garden Grove Supplies is a proudly South Australian family owned and operated business, employing over 150. Located north of Adelaide in the leafy suburb of Golden Grove, and expanding over 10 acres, Garden Grove holds SA's most extensive range of landscaping supplies.

Jun 14, 2016 · Custom Cut Metal Box Or Tray - Just The Way You Need It. Custom cut metal boxes or trays are popular shapes our customers like us to form sheet metal into, be the metal aluminum, stainless steel or carbon steel sheets or plates. The great thing is that there are practically no limits to what a metal box or tray can look like. Small metal box or large metal box, made out of aluminum ...

The city can choose to buy the posts shaped like cylinders or the posts shaped like rectangular prisms. The cylindrical posts have a hollow core, with aluminum 2.5 cm thick, and an outer diameter of 53.4 cm. The rectangular-prism posts have a hollow core, with aluminum 2.5 cm thick, and a square base that measures 40 cm on each side.

Rectangle With Rounded Top. Explanation: The height of an equilateral triangle, shown by the dotted line, is also one of the legs of a right triangle. The hypotenuse is x, the length of each side of this equilateral triangle, and the other leg is half of that, 0.5x.

Given that area = 72 cm² and length = 8 cm, let the breadth of the rectangle be a. As we know, area of a rectangle = length × breadth. Accordingly, A = l × b, so 72 = 8 × a, giving a = 72/8 = 9 cm. Hence, the breadth of the rectangle is 9 cm.

3. Calculate the Area of a Cone …

The equivalent diameter of a 300 mm x 500 mm rectangular duct can be calculated as:
d_e = 1.30 × ((300 mm) × (500 mm))^0.625 / ((300 mm) + (500 mm))^0.25 = 420 mm

Rectangular to Equivalent Circular Duct Calculator. The calculator below is based on formula (1). The …

the number of rectangular cuts possible from a steel or plywood plate; input the large rectangle's inside dimensions and the outside dimensions of the smaller rectangles. Default values are for a 0.5 x 0.8 inch rectangle inside a 10 inch x 10 inch square. The calculator is generic and all units can be used - as long as the same units are used ...

Jan 31, 2020 · The program should allow the user to select the shape (C or c for circle, R or r for Rectangle, T or t for Triangle) whose area he or she . math. an equilateral triangle of side 10cm is inscribed in a circle. find the radius of the circle? show the solution . math. An equilateral triangle is inscribed in a circle.

offers 1,472 triangle road signs products. About 29% of these are Traffic Signs, 0% are Other Roadway Products, and 0% are Traffic Warning Products. A wide variety of triangle road signs options are available to you, such as material, applicable industries, and showroom location.

Jul 31, 2020 · A rectangular sheet of paper of length 10 cm and breadth 24 cm is rolled end to end to form a right circular cylinder of height 8 cm. Find the volume of the cylinder. Solution: When you roll a rectangular sheet into a cylinder shape, the base forms a circle.

Solution: = 4 cm + 4 cm + 4 cm = 12 cm. A square and an equilateral triangle are both examples of regular polygons. Another method for finding the perimeter of a regular polygon is to multiply the number of sides by the length of one side.

correct my code thanks: that's a lot of code. Consider going through one problem at a time. Draw the rectangle; once that works, try drawing the square; once done, draw the triangle. If stuck with any, post a minimal reproducible example for that part.
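A quick way to check the equivalent-diameter arithmetic above is to code the formula directly; here is an illustrative Python sketch (the function name is mine):

```python
# Equivalent circular duct diameter for an a x b rectangular duct,
# using the formula quoted above: d_e = 1.30 * (a*b)^0.625 / (a+b)^0.25
def equivalent_diameter(a, b):
    """Result is in the same units as a and b."""
    return 1.30 * (a * b) ** 0.625 / (a + b) ** 0.25

print(round(equivalent_diameter(300, 500)))  # -> 420 (mm), matching the text
```

The same function works for any rectangular cross-section as long as both sides use the same units.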
But I will indeed correct your code: 1) Don't use null-layout instead use a layout manager as Swing was designed to work with different OS ... Find the area of a triangle, whose sides are : (i) 10 cm, 24 cm and 26 cm (ii) 18 mm, 24 mm and 30 mm (iii) 21 m, 28 m and 35 m Solution: Question 2. Two sides of a triangle are 6 cm and 8 cm. If height of the triangle corresponding to 6 cm side is 4 cm ; find : (i) area of the triangle (ii) height of the triangle corresponding to 8 cm side ... Round, One Piece Seamless Aluminum Candle Mold: 2" 3.5" OPS001: Round, One Piece Seamless Aluminum Candle Mold: 2" 6.5" OPS002: Round, One Piece Seamless Aluminum Candle Mold: 3" 3.5" OPS004: Round, One Piece Seamless Aluminum Candle Mold: 3" 6.5" OPS005
# Improving the Accuracy of Early Diagnosis of Thyroid Nodule Type Based on the SCAD Method

## Abstract

Although early diagnosis of thyroid nodule type is very important, the diagnostic accuracy of standard tests is a challenging issue. We here aimed to find an optimal combination of factors to improve diagnostic accuracy for distinguishing malignant from benign thyroid nodules before surgery. In a prospective study from 2008 to 2012, 345 patients referred for thyroidectomy were enrolled. The sample was split into a training set and a testing set at a ratio of 7:3. The former was used for estimation, variable selection, and obtaining a linear combination of factors. We utilized smoothly clipped absolute deviation (SCAD) logistic regression to achieve the sparse optimal combination of factors. To evaluate the performance of the estimated model on the testing set, a receiver operating characteristic (ROC) curve was utilized. The mean age of the examined patients (66 male and 279 female) was $40.9 \pm 13.4$ years (range 15-90 years). Some 54.8% of the patients (24.3% male and 75.7% female) had benign and 45.2% (14% male and 86% female) malignant thyroid nodules. In addition to the maximum diameters of nodules and lobes, their volumes were considered as related factors for malignancy prediction (a total of 16 factors). However, the SCAD method estimated the coefficients of 8 factors to be zero and eliminated them from the model. Hence a sparse model which combined the effects of 8 factors to distinguish malignant from benign thyroid nodules was generated. An optimal cut-off point of the ROC curve for our estimated model was obtained.

Language: English

Cited by
1. Important Neighbors: A Novel Approach to Binary Classification in High Dimensional Data, BioMed Research International, 2017, 2017, 2314-6141, 1

References
1. Fan J, Li R (2001).
Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Statistical Associat, 96, 1348-60. 2. Finley DJ, Zhu B, Barden CB, et al (2004). Discrimination of benign and malignant thyroid nodules by molecular profiling. Ann Surg, 240, 425. 3. Ghosh D, Chinnaiyan AM (2005). Classification and selection of biomarkers in genomic data using LASSO. Bio Med Res Int, 2005, 147-54. 4. Hong Y, Liu X, Li Z, et al (2009). Real-time ultrasound elastography in the differential diagnosis of benign and malignant thyroid nodules. J Ultrasound Med, 28, 861-7. 5. Lin H, Zhou L, Peng H, et al (2011). Selection and combination of biomarkers using ROC method for disease classification and prediction. Canadian J Statistics, 39, 324-43. 6. Ma S, Huang J (2008). Penalized feature selection and classification in bioinformatics. Briefings Bioinformatics, 9, 392-403. 7. Mansiaux Y, Carrat F (2014). Detection of independent associations in a large epidemiologic dataset: a comparison of random forests, boosted regression trees, conventional and penalized logistic regression for identifying independent factors associated with H1N1pdm influenza infections. BMC Med Res Methodol, 14, 99. 8. Mendonca LF, Vieira SM, Sousa J (2007). Decision tree search methods in fuzzy modeling and classification. Int J Approximate Reason, 44, 106-23. 9. Pourahmad S, Azad M, Paydar S (2015). Diagnosis of malignancy in thyroid tumors by multi-layer perceptron neural networks with different batch learning algorithms. Global J Health Sci, 7, 46. 10. Shahraki H, Salehi A, Zare N (2014). Survival prognostic factors of male breast cancer in Southern Iran: a LASSO-Cox regression approach. Asian Pac J Cancer Prev, 16, 6773-7. 11. Talhaa M, Al-Elaiwi A (2013). Enhancement and classification of mammographic images for breast cancer diagnosis using statistical algorithms. Life Sci J, 10, 764-772. 12. Yan F-R, Lin J-G, Liu Y (2011). 
Sparse logistic regression for diagnosis of liver fibrosis in rat by using SCAD-penalized likelihood. Bio Med Res Int, 8, 875309. 13. Yang F, Wang H-z, Mi H, et al (2009). Using random forest for reliable classification and cost-sensitive learning for medical diagnosis. BMC Bioinformatics, 10, 22. 14. Zhang GP, Berardi VL (1998). An investigation of neural networks in thyroid function diagnosis. Health Care Management Sci, 1, 29-37.
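For readers unfamiliar with the penalty named in the title, here is an illustrative Python sketch of the SCAD penalty function from Fan and Li (2001), reference 1 above. The implementation is my own reading of the standard definition, not code from the paper:

```python
# SCAD penalty p_lambda(beta); a = 3.7 is the conventional tuning constant.
# Near zero it behaves like the lasso (shrinking small coefficients to
# exactly zero), while for large |beta| it is constant, avoiding the
# lasso's bias on strong effects.
def scad_penalty(beta, lam, a=3.7):
    b = abs(beta)
    if b <= lam:
        return lam * b                                        # lasso-like region
    if b <= a * lam:
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2                            # flat region

print(scad_penalty(0.5, 1.0))   # -> 0.5
print(scad_penalty(10.0, 1.0))  # -> 2.35
```

Adding this penalty on each coefficient to the logistic log-likelihood is what drives 8 of the paper's 16 candidate factors to exactly zero.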
#### Vol. 286, No. 2, 2017

ISSN: 1945-5844 (e-only)
ISSN: 0030-8730 (print)

Identities involving cyclic and symmetric sums of regularized multiple zeta values

### Tomoya Machide

Vol. 286 (2017), No. 2, 307–359

##### Abstract

There are two types of regularized multiple zeta values: harmonic and shuffle types. The first purpose of the present paper is to give identities involving cyclic sums of regularized multiple zeta values of both types for depth less than $5$. Michael Hoffman, in “Quasi-symmetric functions and mod $p$ multiple harmonic sums” (Kyushu Journal of Mathematics 69 (2015), 345–366), proved an identity involving symmetric sums of regularized multiple zeta values of harmonic type for arbitrary depth. The second purpose is to prove Hoffman's identity for the shuffle type. We also give a connection between the identities involving cyclic sums and symmetric sums, for depth less than $5$.
http://mathoverflow.net/revisions/30159/list
One instance where classical mechanics has to be treated with 'functional analysis' is infinite-dimensional systems. The prototypical example is the Korteweg–de Vries equation $$u_t + u_{xxx} + 6 u u_x = 0,$$ which a priori looks like a nonlinear PDE. The key is that it is completely integrable, which means that one can associate to it an equivalent evolution for operators on Hilbert spaces. Define $$L(t) = - \frac{d^2}{dx^2} + u(x,t)$$ as an operator on $L^2(\mathbb{R})$. Then this operator obeys $$L_t = [P, L],$$ where $P$ is another operator one can construct from $u$. (The specific form doesn't matter.) The operators $P$ and $L$ are known as a Lax pair. (The $P$ stands for Peter, not for Pair ☺.) This is just the Heisenberg picture of quantum mechanics, so one can use the tools developed there, i.e. functional analysis, to investigate this equation. Of special importance is something known as scattering theory. Just as a final point: KdV is a limit of Navier–Stokes, which is a classical system.

P.S.: In shameless self-promotion, for some details on another system, the Toda lattice, where it is easier to see that it is classical mechanics (one can write down the Hamiltonian easily), see here. I just made the post about KdV since it is well-known.
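The KdV equation quoted above can be sanity-checked against its standard one-soliton solution, $u(x,t) = \frac{c}{2}\operatorname{sech}^2\!\big(\frac{\sqrt{c}}{2}(x - ct)\big)$ (the usual formula for this sign convention; not part of the answer above). A sympy sketch substituting it into $u_t + u_{xxx} + 6uu_x$:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)

# One-soliton solution of u_t + u_xxx + 6*u*u_x = 0 (speed c)
u = c/2 * sp.sech(sp.sqrt(c)/2 * (x - c*t))**2

# Residual of the KdV equation; it should vanish identically.
residual = sp.diff(u, t) + sp.diff(u, x, 3) + 6*u*sp.diff(u, x)

# Symbolic simplification after rewriting sech/tanh in exponentials...
print(sp.simplify(residual.rewrite(sp.exp)))
# ...and a numeric spot-check at an arbitrary point.
print(float(abs(residual.subs({x: 0.3, t: 0.7, c: 2.0}).evalf())))
```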
https://includestdio.com/7455.html
# python – How to change legend size with matplotlib.pyplot ## The Question : 358 people think this question is useful Simple question here: I’m trying to get the size of my legend using matplotlib.pyplot to be smaller (i.e., the text to be smaller). The code I’m using goes something like this: plot.figure() plot.scatter(k, sum_cf, color='black', label='Sum of Cause Fractions') plot.scatter(k, data[:, 0], color='b', label='Dis 1: cf = .6, var = .2') plot.scatter(k, data[:, 1], color='r', label='Dis 2: cf = .2, var = .1') plot.scatter(k, data[:, 2], color='g', label='Dis 3: cf = .1, var = .01') plot.legend(loc=2) 593 people think this answer is useful You can set an individual font size for the legend by adjusting the prop keyword. plot.legend(loc=2, prop={'size': 6}) This takes a dictionary of keywords corresponding to matplotlib.font_manager.FontProperties properties. See the documentation for legend: Keyword arguments: prop: [ None | FontProperties | dict ] A matplotlib.font_manager.FontProperties instance. If prop is a dictionary, a new instance will be created with prop. If None, use rc settings. It is also possible, as of version 1.2.1, to use the keyword fontsize. 69 people think this answer is useful This should do import pylab as plot params = {'legend.fontsize': 20, 'legend.handlelength': 2} plot.rcParams.update(params) Then do the plot afterwards. There are a ton of other rcParams, they can also be set in the matplotlibrc file. Also presumably you can change it passing a matplotlib.font_manager.FontProperties instance but this I don’t know how to do. –> see Yann’s answer. 65 people think this answer is useful using import matplotlib.pyplot as plt Method 1: specify the fontsize when calling legend (repetitive) plt.legend(fontsize=20) # using a size in points plt.legend(fontsize="x-large") # using a named size With this method you can set the fontsize for each legend at creation (allowing you to have multiple legends with different fontsizes). 
However, you will have to type everything manually each time you create a legend. (Note: @Mathias711 listed the available named fontsizes in his answer) Method 2: specify the fontsize in rcParams (convenient) plt.rc('legend',fontsize=20) # using a size in points plt.rc('legend',fontsize='medium') # using a named size With this method you set the default legend fontsize, and all legends will automatically use that unless you specify otherwise using method 1. This means you can set your legend fontsize at the beginning of your code, and not worry about setting it for each individual legend. If you use a named size e.g. 'medium', then the legend text will scale with the global font.size in rcParams. To change font.size use plt.rc('font', size=12) (font.size takes a numeric value). 44 people think this answer is useful There are also a few named fontsizes, apart from the size in points: xx-small, x-small, small, medium, large, x-large, xx-large. Usage: pyplot.legend(loc=2, fontsize = 'x-small') 20 people think this answer is useful There are multiple settings for adjusting the legend size. The two I find most useful are: • labelspacing: which sets the spacing between label entries in multiples of the font size. For instance with a 10 point font, legend(..., labelspacing=0.2) will reduce the spacing between entries to 2 points. The default on my install is about 0.5. • prop: which allows full control of the font size, etc. You can set an 8 point font using legend(..., prop={'size':8}). The default on my install is about 14 points. In addition, the legend documentation lists a number of other padding and spacing parameters including: borderpad, handlelength, handletextpad, borderaxespad, and columnspacing. These all follow the same form as labelspacing and are also in multiples of fontsize. These values can also be set as the defaults for all figures using the matplotlibrc file.
6 people think this answer is useful Now in 2020, with matplotlib 3.2.2 you can set your legend fonts with plt.legend(title="My Title", fontsize=10, title_fontsize=15) where fontsize is the font size of the items in legend and title_fontsize is the font size of the legend title. More information in matplotlib documentation 5 people think this answer is useful On my install, FontProperties only changes the text size, but it’s still too large and spaced out. I found a parameter in pyplot.rcParams: legend.labelspacing, which I’m guessing is set to a fraction of the font size. I’ve changed it with pyplot.rcParams.update({'legend.labelspacing':0.25}) I’m not sure how to specify it to the pyplot.legend function – passing prop={'labelspacing':0.25} or prop={'legend.labelspacing':0.25} comes back with an error. 1 people think this answer is useful plot.legend(loc='lower right', fontsize=11, title='Hey there', title_fontsize=20) (note that fontsize and title_fontsize take numbers or named sizes, and decimal_places is not a legend keyword) 0 people think this answer is useful you can reduce the legend size by setting: plt.legend(labelspacing=y, handletextpad=x, fontsize=z) labelspacing is the vertical space between each label.
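Putting the main methods above together, a minimal self-contained script might look like the following (the Agg backend is used so it runs without a display; the plotted data is made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [4, 5, 6], color='black', label='Sum of Cause Fractions')
ax.scatter([1, 2, 3], [3, 4, 5], color='b', label='Dis 1: cf = .6, var = .2')

# Per-legend font size in points, plus tighter label spacing
leg = ax.legend(loc=2, fontsize=8, labelspacing=0.25)

print(leg.get_texts()[0].get_fontsize())
```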
https://www.physicsforums.com/threads/power-in-solenoids.341312/
Power in solenoids

1. Sep 29, 2009 (physicsnoobie)

So far I've been studying that P = work done (energy change) / time = I × V, and it's all been good... until I came across P = I² × R (I know how this is derived) when studying solenoids in Magnetic Resonance Imaging, where it was referred to as the power loss (energy dissipation over time).

1. I found this somewhat confusing, since it suggests that the process electrical energy → heat energy (losses) is the only energy-change process going on in a solenoid. I thought about it, and so far I'm guessing that maybe this is true in solenoids but not in normal electric circuits, because there electrical energy → heat energy (losses) + light energy (bulbs). Anyway, this was in the context of why copper is not used in the solenoid of an MRI: the large combined resistance would result in large power losses, so superconductors with zero/negligible resistance are used instead.

2. Then again, does that mean that the power of the solenoid in an MRI is zero/infinitely small because the resistance is negligible (following P = I² × R)?

Anyway, thanks to anyone who sheds some light on this, hopefully in simple terms. I'm quite a newbie in physics :(

2. Sep 29, 2009 (Gear300)

The power when considering P = I^2*R is explicitly noted as the rate at which energy is supplied to a resistance. The power of the solenoid, I'm guessing, would be the rate at which energy is supplied to the magnetic field in the solenoid. If I remember correctly, the energy of the magnetic field of a solenoid may be given as U = (1/2)LI^2; so if the current is constant, the power is 0 W. In the case of non-steady-state circuits or AC circuits, the current does change.

Last edited: Sep 29, 2009
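To see why a copper winding is ruled out, it helps to plug numbers into P = I²R. The figures below (wire length, cross-section, current) are illustrative assumptions for the sake of the estimate, not real MRI specifications:

```python
# Rough estimate of resistive power loss P = I**2 * R for a copper solenoid.
# All winding parameters are assumed, order-of-magnitude values.
rho_cu = 1.68e-8        # resistivity of copper at room temperature, ohm*m
wire_length = 20_000.0  # total length of wire in the winding, m (assumed)
wire_area = 4e-6        # wire cross-sectional area, m^2 (assumed, = 4 mm^2)
current = 500.0         # coil current, A (MRI-scale, assumed)

R = rho_cu * wire_length / wire_area  # resistance of the winding
P = current**2 * R                    # dissipated power

print(f"R = {R:.1f} ohm, P = {P/1e6:.1f} MW")
```

With these numbers the winding dissipates tens of megawatts, which is exactly why superconducting coils (R ≈ 0, so P ≈ 0 at constant current) are used instead.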
https://fr.maplesoft.com/support/help/view.aspx?path=StudyGuides/MultivariateCalculus/Chapter7/Examples/Section7-5/Example7-5-9
Example 7-5-9 - Maple Help

Chapter 7: Triple Integration
Section 7.5: Spherical Coordinates

Example 7.5.9

Sketch the region enclosed by the surface $\rho = 1 - \cos(\varphi)$.
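The example itself only asks for a sketch, but as a companion check, the volume enclosed by $\rho = 1 - \cos(\varphi)$ follows from the spherical-coordinate volume element $\rho^2 \sin(\varphi)$. A sympy sketch of the triple integral (sympy rather than Maple, which the original example uses):

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', nonnegative=True)

# V = int_0^{2*pi} int_0^{pi} int_0^{1-cos(phi)} rho**2 sin(phi) drho dphi dtheta
V = sp.integrate(rho**2 * sp.sin(phi),
                 (rho, 0, 1 - sp.cos(phi)),   # radial limit: the surface itself
                 (phi, 0, sp.pi),
                 (theta, 0, 2*sp.pi))
print(V)  # 8*pi/3
```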
https://gmatclub.com/forum/anthony-and-michael-sit-on-the-six-member-board-of-directors-102027.html
It is currently 19 Jan 2018, 20:54 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Events & Promotions Events & Promotions in June Open Detailed Calendar Anthony and Michael sit on the six-member board of directors new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: Hide Tags Intern Joined: 19 Sep 2010 Posts: 25 Kudos [?]: 135 [5], given: 0 Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 01 Oct 2010, 07:39 5 KUDOS 41 This post was BOOKMARKED 00:00 Difficulty: 85% (hard) Question Stats: 47% (01:07) correct 53% (01:35) wrong based on 616 sessions HideShow timer Statistics Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E. 60% [Reveal] Spoiler: OA Kudos [?]: 135 [5], given: 0 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139525 [22], given: 12794 Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 01 Oct 2010, 07:42 22 KUDOS Expert's post 34 This post was BOOKMARKED Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E. 
60% First approach: Let's take the group with Michael: there is a place for two other members and one of them should be taken by Anthony, as there are total of 5 people left, hence there is probability of $$\frac{2}{5}=40\%$$. Second approach: Again in Michael's group 2 places are left, # of selections of 2 out of 5 is $$C^2_5=10$$ = total # of outcomes. Select Anthony - $$C^1_1=1$$, select any third member out of 4 - $$C^1_4=4$$, total # $$=C^1_1*C^1_4=4$$ - total # of winning outcomes. $$P=\frac{# \ of \ winning \ outcomes}{total \ # \ of \ outcomes}=\frac{4}{10}=40\%$$ Third approach: Michael's group: Select Anthony as a second member out of 5 - 1/5 and any other as a third one out of 4 left 4/4, total $$=\frac{1}{5}*\frac{4}{4}=\frac{1}{5}$$; Select any member but Anthony as second member out of 5 - 4/5 and Anthony as a third out of 4 left 1/4, $$total=\frac{4}{5}*\frac{1}{4}=\frac{1}{5}$$; $$Sum=\frac{1}{5}+\frac{1}{5}=\frac{2}{5}=40\%$$ Fourth approach: Total # of splitting group of 6 into two groups of 3: $$\frac{C^3_6*C^_3}{2!}=10$$; # of groups with Michael and Anthony: $$C^1_1*C^1_1*C^1_4=4$$. $$P=\frac{4}{10}=40\%$$ Hope it helps. _________________ Kudos [?]: 139525 [22], given: 12794 Intern Joined: 19 Sep 2010 Posts: 25 Kudos [?]: 135 [0], given: 0 Re: Probability - MGMAT Test [#permalink] Show Tags 01 Oct 2010, 08:05 Thanks. Just a question: I noticed that you responded very quickly to my posts. Which I thank you for. But I am wondering if it's ok that I post questions that have been obviously posted before. If it's a problem then please tell me how can I check whether a question is already on the forum or not. Thanks. Kudos [?]: 135 [0], given: 0 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139525 [2], given: 12794 Re: Probability - MGMAT Test [#permalink] Show Tags 01 Oct 2010, 08:26 2 KUDOS Expert's post Barkatis wrote: Thanks. Just a question: I noticed that you responded very quickly to my posts. Which I thank you for. 
But I am wondering if it's ok that I post questions that have been obviously posted before. If it's a problem then please tell me how can I check whether a question is already on the forum or not. Thanks. Generally it's a good idea to do the search before posting, for example I would search in PS subforum (there is a search field above the topics) for the word "Anthony", don't think that there are many questions with this name. Though it's not a probelm to post a question that was posted before: if moderators find previous discussions they will merge the topics, copy the solution from there or give a link to it. _________________ Kudos [?]: 139525 [2], given: 12794 Intern Joined: 19 Sep 2010 Posts: 25 Kudos [?]: 135 [0], given: 0 Re: Probability - MGMAT Test [#permalink] Show Tags 04 Oct 2010, 11:56 Bunuel, A question just came to my mind concerning the second approach of the solution that you proposed: total # of winning outcomes Select Anthony: 1C1=1, select any third member out of 4: 4C1=4, total # =1C1*4C1=4 If we say 1C1*4C1 don't we assume that the winning can be either (Anthony,Third member) or (Third member, Anthony) but not both ? shouldn't we multiply 1C1*4C1 by 2 to get the total number of possible combinations ? Kudos [?]: 135 [0], given: 0 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139525 [0], given: 12794 Re: Probability - MGMAT Test [#permalink] Show Tags 04 Oct 2010, 12:12 Barkatis wrote: Bunuel, A question just came to my mind concerning the second approach of the solution that you proposed: total # of winning outcomes Select Anthony: 1C1=1, select any third member out of 4: 4C1=4, total # =1C1*4C1=4 If we say 1C1*4C1 don't we assume that the winning can be either (Anthony,Third member) or (Third member, Anthony) but not both ? shouldn't we multiply 1C1*4C1 by 2 to get the total number of possible combinations ? We are counting # of committees with Anthony and Michael: {M,A,1}; {M,A,2}; {M,A,3}; {M,A,4}. 
Here {M,A,1} is the same committee as {M,1,A}. _________________ Kudos [?]: 139525 [0], given: 12794 Intern Joined: 06 Nov 2010 Posts: 23 Kudos [?]: 39 [0], given: 16 Re: Probability - MGMAT Test [#permalink] Show Tags 21 Jan 2011, 15:54 Bunuel, can you advise why do we have to divide 6C3 * 3C3 by 2!, in the fourth approach ? Kudos [?]: 39 [0], given: 16 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139525 [0], given: 12794 Re: Probability - MGMAT Test [#permalink] Show Tags 21 Jan 2011, 16:14 Expert's post 6 This post was BOOKMARKED praveenvino wrote: Bunuel, can you advise why do we have to divide 6C3 * 3C3 by 2!, in the fourth approach ? We are dividing by 2! (factorial of the # of groups) because the order of the groups is not important (we don't have the group #1 and the group #2) and we need to get rid of the duplications. Dividing a group into subgroups: combinations-problems-95344.html split-the-group-101813.html 9-people-and-combinatorics-101722.html ways-to-divide-99053.html combination-and-selection-into-team-106277.html ways-to-split-a-group-of-6-boys-into-two-groups-of-3-boys-ea-105381.html _________________ Kudos [?]: 139525 [0], given: 12794 Senior Manager Joined: 13 Aug 2012 Posts: 456 Kudos [?]: 589 [3], given: 11 Concentration: Marketing, Finance GPA: 3.23 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 28 Dec 2012, 01:39 3 KUDOS Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E. 60% Here is my approach: I first counted the possible creation of 2 subcommittees without restriction: $$\frac{6!}{3!3!}* \frac{3!}{3!}= 20$$ Then, I now proceed to counting the number of ways to create committee with Michael and Anthony together. 
M A _ + _ _ _ = $$\frac{4!}{1!3!} * \frac{1!}{1!} = 4$$ MA could be in group#1 or group#2. Thus, $$=4*2 = 8$$ Final calculation: $$8/20 = 4/10 = 40%$$ _________________ Impossible is nothing to God. Kudos [?]: 589 [3], given: 11 Intern Joined: 25 May 2014 Posts: 22 Kudos [?]: 8 [0], given: 13 GPA: 3.55 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 08 Jun 2015, 08:24 Bunuel wrote: Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E. 60% First approach: Let's take the group with Michael: there is a place for two other members and one of them should be taken by Anthony, as there are total of 5 people left, hence there is probability of 2/5=40%. Second approach: Again in Michael's group 2 places are left, # of selections of 2 out of 5 5C2=10 - total # of outcomes. Select Anthony - 1C1=1, select any third member out of 4 - 4C1=4, total # =1C1*4C1=4 - total # of winning outcomes. P=# of winning outcomes/# of outcomes=4/10=40% Third approach: Michael's group: Select Anthony as a second member out of 5 - 1/5 and any other as a third one out of 4 left 4/4, total=1/5*4/4=1/5; Select any member but Anthony as second member out of 5 - 4/5 and Anthony as a third out of 4 left 1/4, total=4/5*1/4=1/5; Sum=1/5+1/5=2/5=40% Fourth approach: Total # of splitting group of 6 into two groups of 3: 6C3*3C3/2!=10 # of groups with Michael and Anthony: 1C1*1C1*4C1=4 P=4/10=40% Hope it helps. Bunuel, my question is for your fourth approach: Why don't we have to divide "# of groups with Michael and Anthony: 1C1*1C1*4C1=4" by 2! anymore? To expound. For total # of groups: 6C3 to choose 3 people for first group. 3C3 to choose 3 people for second group. (6C3)(3C3)/2! divide by 2! as order is not important. 
For # groups with Michael and Anthony: 1C1 to choose 1st person for group 1, 1C1 to choose 2nd person for group 1, 4C1 to choose 3rd person for group 1 = 4 3C3 to choose 3 people for second group. (1C1)(1C1)(4C1)(3C3) <-- how come we no longer need to divide this by 2! ? Kudos [?]: 8 [0], given: 13 Current Student Joined: 18 Oct 2014 Posts: 902 Kudos [?]: 472 [0], given: 69 Location: United States GMAT 1: 660 Q49 V31 GPA: 3.98 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 08 Jun 2015, 18:43 Hi! Just a question here. Number of ways to Divide 3 people out of 6 are- 6C3= 20 Ways Number of ways by which 1st member is Michal, 2nd is Anthony and 3rd is anyone from remaining four are- 1X1X4= 4 ways So probability= number of desired events/number of total events *100= 4/20*100= 20% This is the probability of Michal and Anthony being present in group 1. Group 2 will also have the same probability of 20%. So total probability is 20+20=40% Please suggest if I am wrong anywhere. Thanks _________________ I welcome critical analysis of my post!! That will help me reach 700+ Kudos [?]: 472 [0], given: 69 Manager Joined: 14 Mar 2014 Posts: 148 Kudos [?]: 181 [0], given: 124 GMAT 1: 710 Q50 V34 Re: Lampard and Terry sit on the six member board [#permalink] Show Tags 10 Aug 2015, 07:37 1 This post was BOOKMARKED VenoMfTw wrote: Lampard and Terry sit on the six member board of directors for company X. If the board is to be split up into 2 three - person subcommittees, what percent of all the possible subcommittees that include Terry also include Lampard? a. 20 b. 30 c. 40 d. 50 e. 
60 All Possible Subcommittees that has terry in it = T _ _ --> 5c2 = 10 Possible subcommittees that include Terry also include Lampard --> T L _ --> 4c1 = 4 Percentage = ( 4 /10 ) * 100 = 40 Option C _________________ I'm happy, if I make math for you slightly clearer And yes, I like kudos ¯\_(ツ)_/¯ Kudos [?]: 181 [0], given: 124 Intern Joined: 26 Jul 2015 Posts: 20 Kudos [?]: 6 [0], given: 12 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 11 Aug 2015, 21:16 Bunuel, some day you should take this test and DEMOLISH it. http://www.matrix67.com/iqtest/ Cheers! Kudos [?]: 6 [0], given: 12 Senior Manager Joined: 19 Oct 2012 Posts: 343 Kudos [?]: 69 [0], given: 103 Location: India Concentration: General Management, Operations GMAT 1: 660 Q47 V35 GMAT 2: 710 Q50 V38 GPA: 3.81 WE: Information Technology (Computer Software) Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 07 Oct 2015, 07:11 I want to put my approach out there too. Bunuel, please vet this if you can. So, the total combinations of making a 3 group out of 6 members: 20. there are 2 subcommittees. So say, 1st group has Micheal and Anthony and 2nd group doesn't; so: M A __(any of 4 remaining members)__ = 4 possibilities Similarly, say 2nd group has Micheal and Anthony and 1st group doesnt; so: M A ___(Any of remaining 4 members)___ = 4 possibilities. %tage having MA in same group = (4+4)/20 x 100 = 40 % _________________ Citius, Altius, Fortius Kudos [?]: 69 [0], given: 103 Intern Joined: 26 Sep 2015 Posts: 3 Kudos [?]: [0], given: 456 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 07 Dec 2015, 16:13 Bunuel wrote: Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E. 
60% First approach: Let's take the group with Michael: there is a place for two other members and one of them should be taken by Anthony, as there are total of 5 people left, hence there is probability of 2/5=40%. Second approach: Again in Michael's group 2 places are left, # of selections of 2 out of 5 5C2=10 - total # of outcomes. Select Anthony - 1C1=1, select any third member out of 4 - 4C1=4, total # =1C1*4C1=4 - total # of winning outcomes. P=# of winning outcomes/# of outcomes=4/10=40% Third approach: Michael's group: Select Anthony as a second member out of 5 - 1/5 and any other as a third one out of 4 left 4/4, total=1/5*4/4=1/5; Select any member but Anthony as second member out of 5 - 4/5 and Anthony as a third out of 4 left 1/4, total=4/5*1/4=1/5; Sum=1/5+1/5=2/5=40% Fourth approach: Total # of splitting group of 6 into two groups of 3: 6C3*3C3/2!=10 # of groups with Michael and Anthony: 1C1*1C1*4C1=4 P=4/10=40% Hope it helps. Hi Bunnuel, On approach 3, are you implying that the order does matter? By calculating the odds of picking Anthony in the second position and anyone else in the third position AND the odds of picking anyone except Anthony in the second position and Anthony in the 3rd position you are basically establishing that the order does matter (whether Anthony is in the second or 3rd position). Can you clarify? I am sure there is something wronog in my thought process. Thanks. Kudos [?]: [0], given: 456 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7867 Kudos [?]: 18480 [1], given: 237 Location: Pune, India Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 17 Dec 2015, 03:32 1 KUDOS Expert's post jegf1987 wrote: Bunuel wrote: Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 
50% E. 60% First approach: Let's take the group with Michael: there is a place for two other members and one of them should be taken by Anthony, as there are total of 5 people left, hence there is probability of 2/5=40%. Second approach: Again in Michael's group 2 places are left, # of selections of 2 out of 5 5C2=10 - total # of outcomes. Select Anthony - 1C1=1, select any third member out of 4 - 4C1=4, total # =1C1*4C1=4 - total # of winning outcomes. P=# of winning outcomes/# of outcomes=4/10=40% Third approach: Michael's group: Select Anthony as a second member out of 5 - 1/5 and any other as a third one out of 4 left 4/4, total=1/5*4/4=1/5; Select any member but Anthony as second member out of 5 - 4/5 and Anthony as a third out of 4 left 1/4, total=4/5*1/4=1/5; Sum=1/5+1/5=2/5=40% Fourth approach: Total # of splitting group of 6 into two groups of 3: 6C3*3C3/2!=10 # of groups with Michael and Anthony: 1C1*1C1*4C1=4 P=4/10=40% Hope it helps. Hi Bunnuel, On approach 3, are you implying that the order does matter? By calculating the odds of picking Anthony in the second position and anyone else in the third position AND the odds of picking anyone except Anthony in the second position and Anthony in the 3rd position you are basically establishing that the order does matter (whether Anthony is in the second or 3rd position). Can you clarify? I am sure there is something wronog in my thought process. Thanks. Responding to a pm: No, we know very well that order does not matter when forming groups. Here, you have to form a group/subcommittee so the order in which you pick people is irrelevant. That said, we consider order in method 3 because of the limitations of this particular method. We are using probability. I can find the probability that the "next" guy I pick is Anthony. But how do I find the probability that Anthony is one of the next two guys I pick? 
For that, I have to use two steps: - The next one is Anthony, or - The next-to-next one is Anthony. We add these two probabilities and get the probability that either of the next two guys is Anthony. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for \$199 Veritas Prep Reviews Kudos [?]: 18480 [1], given: 237

Intern Joined: 31 Oct 2015 Posts: 36 Kudos [?]: 7 [0], given: 53 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 20 Jan 2016, 15:50 See the attached image for an explanation. Thanks. Kudos [?]: 7 [0], given: 53

Intern Joined: 03 May 2016 Posts: 14 Kudos [?]: 3 [0], given: 23 Location: United States GMAT 1: 710 Q58 V48 GPA: 3.46 Probability Question : Anthony and Michael sit on the six-member board [#permalink] Show Tags 15 Jun 2016, 09:32 Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A) 30% B) 40% C) 50% D) 60% E) 70% This is from an old MGMT Practice test. So 3 persons are chosen from 6 people; 6C3 would be the total: 20. Because Michael and Anthony have to be together, I thought 6C3 / 2! would give the probability. I didn't get the correct answer. Help! Kudos [?]: 3 [0], given: 23

SVP Joined: 11 Sep 2015 Posts: 1995 Kudos [?]: 2869 [0], given: 364 Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 15 Jun 2016, 10:34 Expert's post Top Contributor Barkatis wrote: Anthony and Michael sit on the six-member board of directors for company X. If the board is to be split up into 2 three-person subcommittees, what percent of all the possible subcommittees that include Michael also include Anthony? A. 20% B. 30% C. 40% D. 50% E.
60% We can think of this as a probability question: What is the probability that Anthony and Michael are on the same subcommittee? Now, assume that we're creating subcommittees. We want to place 6 people in the following spaces: _ _ _ | _ _ _ First, we place Michael in one subcommittee; it makes no difference which one: M _ _ | _ _ _ When we go to place Anthony, we see that there are 5 spaces remaining, and 2 of those spaces are on the same subcommittee as Michael. So the probability that they are on the same subcommittee is 2/5 = 40%. [Reveal] Spoiler: C _________________ Brent Hanneson – Founder of gmatprepnow.com Kudos [?]: 2869 [0], given: 364

Senior Manager Joined: 21 Aug 2016 Posts: 300 Kudos [?]: 24 [0], given: 136 Location: India GPA: 3.9 WE: Information Technology (Computer Software) Re: Anthony and Michael sit on the six-member board of directors [#permalink] Show Tags 07 May 2017, 02:00 Hi Bunuel, I understood all the approaches you provided. Please explain why the approach below is incorrect. We have to form 2 committees of 3 persons each. Anthony can be placed in one of the two committees in 2 ways, but, as per the condition, Michael should be in the same committee, so once Anthony's committee is selected, there is only one way of selecting Michael's committee. p = fav cases / total cases P = (2*1)/(2*2) p = 1/2 Thanks a lot Kudos [?]: 24 [0], given: 136
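All of the approaches in this thread can be confirmed by brute-force enumeration. A short sketch (the names other than Michael and Anthony are placeholders for the remaining board members):

```python
from itertools import combinations

# Fix Michael in a subcommittee and choose the other 2 members
# from the remaining 5 board members.
others = ["Anthony", "C", "D", "E", "F"]
groups = list(combinations(others, 2))                # 5C2 = 10 subcommittees
with_anthony = [g for g in groups if "Anthony" in g]  # 4 of them include Anthony

print(len(with_anthony), "/", len(groups))  # 4 / 10 -> 40%
```

This matches the combinatorial count 4/10 = 40% from the fourth approach above.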
https://www.heldermann.de/JLT/JLT30/JLT303/jlt30035.htm
Journal of Lie Theory 30 (2020), No. 3, 673--690 Copyright Heldermann Verlag 2020

Some Harmonic Analysis on Commutative Nilmanifolds

Andrea L. Gallo FaMAF -- CIEM -- CONICET, Universidad Nacional, Córdoba 5000, Argentina andregallo88@gmail.com
Linda V. Saal FaMAF -- CIEM -- CONICET, Universidad Nacional, Córdoba 5000, Argentina saal@mate.uncor.edu

We consider a family of Gelfand pairs $(K \ltimes N, N)$ (in short $(K,N)$) where $N$ is a two-step nilpotent Lie group, and $K$ is the group of orthogonal automorphisms of $N$. This family has a nice analytic property: almost all of these 2-step nilpotent Lie groups have square integrable representations. In these cases, following Moore-Wolf's theory, we find an explicit expression for the inversion formula of $N$, and as a consequence, we decompose the regular action of $K \ltimes N$ on $L^{2}(N)$. This explicit expression for the Fourier inversion formula of $N$, specialized to a class of commutative nilmanifolds described by J. Lauret, sharpens the analysis of J. A. Wolf in Section 14.5 of {\it Harmonic Analysis on Commutative Spaces} [Mathematical Surveys and Monographs 142, American Mathematical Society, Providence (2007)], and in {\it On the analytic structure of commutative nilmanifolds} [J. Geometric Analysis 26 (2016) 1011--1022], concerning the regular action of $K \ltimes N$ on $L^2(N)$. When $N$ is the Heisenberg group, we obtain the decomposition of $L^{2}(N)$ under the action of $K \ltimes N$ for all $K$ such that $(K,N)$ is a Gelfand pair. Finally, we also give a parametrization of the generic spherical functions associated to the pair $(K,N)$, and we give an explicit expression for these functions in some cases.

Keywords: Gelfand pairs, inversion formula, nilpotent Lie group, regular representation. MSC: 43A80, 22E25.
http://cs.oberlin.edu/~cs150/lab-8/warmup/
# Warmup

Disclaimer: This week’s warmup has a lot of text. Don’t worry! There’s not that much actual work.

### Part A: Musical Math

The main lab for this week involves audio processing. This exercise will cover some of the basics of how sound works in a computer.

#### Making Waves

Sound is transmitted through air as a pressure wave: alternating regions of high- and low-pressure air that cause our inner ears (and other things) to vibrate. The image you might have in your head of a “sound wave” is a graph of the air pressure hitting your eardrum over time. The peaks are high pressure, and the troughs are low pressure. We can model these pressure fluctuations with a sine wave, which is what we’ll be working with this week.

Choice of Waveforms: There are a number of different mathematical waveforms that we could use to model a sound wave. For example, we could use a triangle wave (shown below). Triangle waves have a sharp transition between the rising and falling portions of the wave. Such harsh changes in pressure are not often found in nature. This is one of the reasons we are choosing the gradually changing sine wave for our model.

There are two general principles that hold when dealing with sound waves.

• The higher the peaks and lower the troughs, the louder the sound.
• The closer together the peaks, the higher-pitched the sound.

Let’s use the red sine wave from above as a reference. The blue sine wave below has higher amplitude (higher peaks and lower troughs), and will therefore produce a louder sound. The green sine wave, on the other hand, has the same amplitude but higher frequency (peaks are closer together). Therefore, the green sine wave will be the same volume but higher-pitched compared to the red sine wave.

A real sound wave contains a lot of information. Every point in time is associated with a different air pressure. There are infinitely many such times, so how does a computer store all that information? It doesn’t. Instead, we use samples.
Imagine just checking the value every 1/sample_rate seconds, where sample_rate is the number of samples you want to take each second. You would end up storing the values at certain points on the waveform (e.g., corresponding to the dots in the image below). These values can be stored in a list. The benefit of modelling sound as a sine wave is that we have a function for the pressure over time (namely, the sine function), and so we can use that formula to compute the values of our samples. The value for the pressure at a given time $t$ is

$$\mathrm{pressure}(t) = a \times \sin(2\pi \times f \times t)$$

where

• $t$ is the time measured in seconds,
• $a$ is the amplitude of the wave (higher = louder),
• $\sin()$ is the sine function,
• $\pi$ is the constant 3.14159, and
• $f$ is the frequency of the wave in cycles per second or Hertz (higher = higher-pitched).

If you want to use Python to generate a list of samples from a sine wave, just append each successive sample value to the list using the following formula:

samples.append(amp * math.sin(2 * math.pi * freq * i/sample_rate))

Here’s a breakdown of the formula’s components:

• samples.append() adds each new sample value to the end of the samples list,
• amp is the amplitude $a$,
• math.sin() is the sine function from the math module,
• math.pi is Python’s stored value of the constant $\pi = 3.14159265…$,
• freq is the frequency $f$, and
• i/sample_rate is the calculation of time $t$ for sample i.

The formula above allows us to take a sample whenever the time is a multiple of 1/sample_rate.

Ok, so here’s the exercise. With your partner, use the formula for samples to compute the first 10 samples (i.e., i=0,...,9) when amp=1, freq=1, and sample_rate=10. You can use the console to do the computation. If you have a convenient way of plotting them, even better! You should see how they form the general shape of a wave.

#### Pitch

You won’t just be making sound in this lab, you’ll be making music.
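Before moving on to pitch, here is what that sampling exercise looks like as runnable Python. The function name make_samples is mine, not the lab’s; it just wraps the append formula given above.

```python
import math

def make_samples(amp, freq, sample_rate, n):
    """Sample amp * sin(2*pi*freq*t) at times t = i / sample_rate, i = 0..n-1."""
    samples = []
    for i in range(n):
        samples.append(amp * math.sin(2 * math.pi * freq * i / sample_rate))
    return samples

# The warmup's numbers: amp=1, freq=1 Hz, sample_rate=10 -> exactly one cycle.
print([round(s, 3) for s in make_samples(1, 1, 10, 10)])
# -> [0.0, 0.588, 0.951, 0.951, 0.588, 0.0, -0.588, -0.951, -0.951, -0.588]
```

Plotting these ten values shows one full period of the sine wave, matching the shape the exercise asks you to look for.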
The Western scale has 12 notes, which repeat as you go up in pitch. Each corresponds to a key on the piano keyboard. After 12 notes (somewhat unhelpfully called half tones), you’ve traveled an octave, after which point you’ll hear the same notes as before, but higher in pitch. Here’s the weird thing: when you go up 12 notes, the frequency doubles. In general, what you see as linear progress up a keyboard is actually a multiplicative change in frequency. Since an octave is divided into 12 half tones, the doubling of frequency is divided up across the 12 tones as well. In other words, increasing the pitch by 1 half tone increases the frequency by a factor of 2**(1/12) (i.e., $\sqrt[12]{2}$).

Here are some examples to illustrate the discussion above.

• The A note that orchestras usually tune to has a frequency of 440Hz.
• The A an octave below that has a frequency of 220Hz (this note is called A220).
• The note one half tone above A220, an A#, has a frequency of 220*(2**(1/12)).
• The C between A220 and A440 is sometimes called “Middle C”. It’s 3 half tones above A220 and has a frequency of 220*(2**(1/12))*(2**(1/12))*(2**(1/12)), which is more concisely written as 220*(2**(3/12)).

Here’s a quick exercise for you to check your understanding.

1. What’s the frequency of the note an octave above A440 (A)?
2. What’s the frequency of the note 5 half tones above A440 (D)?
3. What’s the frequency of the note 2 half tones below A440 (G)?

Note that the letter names aren’t important for getting the right answer. They’re just there in case you’re curious about note names.

### Part B: Opinion Dynamics

#### The Problem

In this exercise, you’ll see a couple of the most common object-oriented programming bugs and get practice testing object-oriented programs. In the process, you’ll do some computational social science: the code for this exercise models opinions spreading on social networks.
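Looping back to Part A for a moment: the half-tone rule can be wrapped in a tiny helper that answers all three check-your-understanding questions. The function name half_tones_above is mine, not the lab’s.

```python
def half_tones_above(base_freq, n):
    """Frequency n half tones above base_freq; negative n goes down."""
    return base_freq * 2 ** (n / 12)

print(half_tones_above(440, 12))            # octave above A440: frequency doubles to 880.0
print(round(half_tones_above(440, 5), 2))   # D above A440 -> 587.33
print(round(half_tones_above(440, -2), 2))  # G below A440 -> 392.0
```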
Here’s the story you should have in mind: you’ve got a population of people, and each person starts with an initial opinion, which is a number between 0 and 1. Think of 0 as representing “I hate board games,” and 1 being “I love board games.” With something like that, it’s reasonable to expect people to be influenced over time by their friends’ opinions. The code here will crudely model this phenomenon.

We’ll model our simple social network in the following way: each person will be a node, and they’ll have links pointing to the people they like. For example, you might have the following: Each person’s node is colored in proportion to their initial opinion: 0 is black, 1 is white, and in-between values would be gray. Note that it’s possible to like someone who doesn’t like you. Ang likes Boone, but Boone does not like Ang.

Here’s how we’ll capture opinion dynamics in a step-by-step method. Each step, we’ll pick someone at random from the population. That person will then update their opinion by taking the average opinion of all the people they like, including themselves. In our example, if we chose Ang to update, their opinion would go from 0 to 0.33, because among the people they like, two people hate board games (themselves and Charlie), and one person loves board games (Boone). In the updated graph below, Ang’s shade got a bit lighter.

If you update lots of times, you might hope the network will settle on a steady-state opinion, and that this opinion will depend on the network structure. Pretty cool! If you want, take a moment with your partner to discuss how you might set up such a simulation.

#### The Program

We have started implementing the Person class and some test code in opinion.py – it is set up to do the simulation as described above. Like many files which define and then test/use a class, we have the class definition at the top of the file and the main() function at the bottom.
Your goal is to (a) debug some of the class methods and (b) test the simulation by adding and running code in main(). Start by taking a look at the Person class and then perform the following steps. Note that update() is broken and you’ll resolve that as the second step:

• Add code to main() that will help you eventually test the update() method. Think about what the simplest example of updating might be. We suggest making 2 Person instances, one who likes board games and one who does not, and having one befriend the other. This is like having a network with two people and one arrow connection. If you follow these instructions, once you run update() on the person who did the befriending, their opinion should be 0.5.
• You need to debug update() prior to testing it. The issue is that update() is missing four self. references. Figure out where they go, add them to the code, and use the test from above to check you fixed all the errors.

Now let’s test the whole simulation! To do so, execute the following steps:

• Read over the code in main() that runs the function update_step(), which picks a person and asks them to update. Spend some time trying to figure out what the test code does.
• update_step() is currently broken. Use the test code to help you fix the bug in update_step().

Extra: Now that you’ve got working code, play with it some! Here are some questions you could ask. You can use the debugging code as starter code, but modify it to perform a few experiments. You don’t need complete answers to these questions, but you and your partner should try and come up with some guesses.

• If you call update_step() enough times, do opinions always converge to a common value? How many steps does it take?
• If you rerun the code twice, do the opinions converge to the same value?
• How do the final opinions depend on who is friends with whom?
• How would you add to this model to make it more realistic?
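Since opinion.py itself is not reproduced here, the following is a sketch of how the pieces described above could fit together. The class and method names (Person, befriend, update, update_step) follow the text, but the real lab code may differ in detail.

```python
import random

class Person:
    """One node in the social network (a sketch; the lab's opinion.py may differ)."""
    def __init__(self, opinion):
        self.opinion = opinion   # number in [0, 1]
        self.friends = []        # the people this person likes

    def befriend(self, other):
        self.friends.append(other)

    def update(self):
        # Average opinion of everyone this person likes, including themselves.
        total = self.opinion
        for friend in self.friends:
            total += friend.opinion
        self.opinion = total / (len(self.friends) + 1)

def update_step(people):
    """One simulation step: a random person recomputes their opinion."""
    random.choice(people).update()

# The two-person test suggested above: after one update, the opinion is 0.5.
a, b = Person(0.0), Person(1.0)
a.befriend(b)
a.update()
print(a.opinion)  # 0.5
```

Calling update_step([a, b]) repeatedly then drives the little network toward a steady state, which is exactly what the “Extra” questions ask you to explore.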
https://history.jes.su/s207987840001565-0-1/?sl=en
The Two Sides of Hungarian National Policies after the First World War

PII S207987840001565-0-1 DOI 10.18254/S0001565-0-1 Publication type: Article Status: Published

Abstract: The article is dedicated to the interwar period in Hungarian ethnic policy, which, under new geopolitical circumstances, was aimed at resolving the national question both within the country and outside it, and directed at the establishment of the so-called “Hungarian world” in the territories of neighboring countries with significant Hungarian populations. In parallel, it examines the factors that led Hungary to participate in the Second World War on the side of Nazi Germany.

Keywords: Hungary, national question, the Treaty of Trianon, the Hungarian diaspora in neighboring states

26.08.2016 Publication date 15.10.2016
https://help.sap.com/saphelp_globext607_10/helpdata/en/96/8b44fe43ce11d189ee0000e81ddfac/content.htm?no_cache=true
Line Layout Variants for the Line Item Display

For the line item display, you can choose between several line layout variants. If you choose standard line layout in the line item display for a customer account, the most important data from the document header and line items is displayed. This includes, for example, document number, document type, dunning block, dunning key, amount, and clearing document number. If you select the variant with terms of payment, information about the terms of payment is displayed in addition to the document number, amount, and the clearing document number.

The above-mentioned and other variants are already defined in the standard system. You can change these variants or add new ones. You can define variants for individual account types or generally, for all account types.

In addition to the information in the lines, you can also display other fields. The possible additional fields are already defined in the system. You can choose them from a selection screen in the display. If you require fields other than those defaulted in the standard system, you must specify these separately by account type. In addition, you can specify the sequence of the fields on the selection screen. You place the fields that you require most often at the top of the field list.

Defining Variants

If you want to define your own variants, proceed as follows:

1. Set the fields in the order in which they are displayed (see the following figure, 1) and define the display format (see the following figure, 2). If you do not want a field to be displayed in full, you can specify the area to be displayed via the display format.
2. Specify the names of your variants (see the following figure, 3), and the column headings for field display (see the following figure, 4).
3. Translate the names and column headings if you use functions where the variant is used in foreign languages.

The figure below shows the definitions for standard line layout.
If you choose this line layout in line item display, item information is displayed as in the figure below. The header information is transferred from the definition (see figure 4 above). The clearing document number field is shortened because only the last characters are required. The clearing document number is shown starting with the eighth character (offset 7), in length 3, with distance 1 from the previous column (see figure 2 above).
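The offset/length rule in this example behaves like simple substring selection. A sketch in Python, using a hypothetical 10-character clearing document number (the value itself is made up for illustration):

```python
# Display format from the example: offset 7, length 3 shows only the
# last three characters of a 10-character clearing document number.
clearing_doc_no = "0100000020"   # hypothetical document number
offset, length = 7, 3
print(clearing_doc_no[offset:offset + length])  # "020"
```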
https://socratic.org/questions/if-an-object-with-uniform-acceleration-or-deceleration-has-a-speed-of-12-m-s-at--2
# If an object with uniform acceleration (or deceleration) has a speed of 12 m/s at t=0 and moves a total of 14 m by t=12, what was the object's rate of acceleration?

Using $s = ut + \frac{1}{2}a{t}^{2}$:
$14 = 12 \times 12 + \frac{1}{2} a \times {12}^{2}$
$a = -\frac{260}{144} = -\frac{65}{36}\ \mathrm{m\,s^{-2}} \approx -1.81\ \mathrm{m\,s^{-2}}$
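With the constant-acceleration relation s = ut + (1/2)at², the number can be checked in a couple of lines of Python:

```python
# Solve s = u*t + 0.5*a*t**2 for a, with u = 12 m/s, s = 14 m, t = 12 s.
u, s, t = 12.0, 14.0, 12.0
a = 2 * (s - u * t) / t**2
print(a)  # -1.8055... m/s^2, i.e. -65/36
```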
https://thermo.readthedocs.io/thermo.wilson.html
# Wilson Gibbs Excess Model (thermo.wilson)

This module contains a class Wilson for performing activity coefficient calculations with the Wilson model. An older, functional calculation for activity coefficients only is also present, Wilson_gammas. For reporting bugs, adding feature requests, or submitting pull requests, please use the GitHub issue tracker.

## Wilson Class

class thermo.wilson.Wilson(T, xs, lambda_coeffs=None, ABCDEF=None, lambda_as=None, lambda_bs=None, lambda_cs=None, lambda_ds=None, lambda_es=None, lambda_fs=None)[source]

Class for representing a liquid with excess Gibbs energy represented by the Wilson equation. This model is capable of representing most nonideal liquids for vapor-liquid equilibria, but is not recommended for liquid-liquid equilibria. The two basic equations are as follows; all other properties are derived from these.

$g^E = -RT\sum_i x_i \ln\left(\sum_j x_j \lambda_{i,j} \right)$

$\Lambda_{ij} = \exp\left[a_{ij}+\frac{b_{ij}}{T}+c_{ij}\ln T + d_{ij}T + \frac{e_{ij}}{T^2} + f_{ij}{T^2}\right]$

Parameters

T : float
    Temperature, [K]
xs : list[float]
    Mole fractions, [-]
lambda_coeffs : list[list[list[float]]], optional
    Wilson parameters, indexed by [i][j] and then each value is a 6 element list with parameters (a, b, c, d, e, f); either lambda_coeffs or the lambda parameters are required, [various]
ABCDEF : tuple(list[list[float]], 6), optional
    The lambda parameters can be provided as a tuple, [various]
lambda_as : list[list[float]], optional
    a parameters used in calculating Wilson.lambdas, [-]
lambda_bs : list[list[float]], optional
    b parameters used in calculating Wilson.lambdas, [K]
lambda_cs : list[list[float]], optional
    c parameters used in calculating Wilson.lambdas, [-]
lambda_ds : list[list[float]], optional
    d parameters used in calculating Wilson.lambdas, [1/K]
lambda_es : list[list[float]], optional
    e parameters used in calculating Wilson.lambdas, [K^2]
lambda_fs : list[list[float]], optional
    f parameters used in calculating Wilson.lambdas,
    [1/K^2]

Notes

In addition to the methods presented here, the methods of its base class thermo.activity.GibbsExcess are available as well.

Warning: If parameters are omitted for all interactions, this model reverts to thermo.activity.IdealSolution. In large systems it is common to only regress parameters for the most important components; set lambda parameters for other components to 0 to “ignore” them and treat them as ideal components. This class works with Python lists, numpy arrays, and can be accelerated with Numba or PyPy quite effectively.

References

1. Smith, J. M., and H. C. Van Ness. Introduction to Chemical Engineering Thermodynamics. 4th Edition, 1987.
2. Kooijman, Harry A., and Ross Taylor. The ChemSep Book. Books on Demand Norderstedt, Germany, 2000.
3. Gmehling, Jürgen, Michael Kleiber, Bärbel Kolbe, and Jürgen Rarey. Chemical Thermodynamics for Process Simulation. John Wiley & Sons, 2019.

Examples

Example 1

This object-oriented class provides access to many more thermodynamic properties than Wilson_gammas, but it can also be used like that function. In the following example, gammas are calculated with both functions. The lambdas cannot be specified directly in this class, but fixed values can be converted with the log function so that fixed values will be obtained.

>>> Wilson_gammas([0.252, 0.748], [[1, 0.154], [0.888, 1]])
[1.881492608717, 1.165577493112]
>>> GE = Wilson(T=300.0, xs=[0.252, 0.748], lambda_as=[[0, log(0.154)], [log(0.888), 0]])
>>> GE.gammas()
[1.881492608717, 1.165577493112]

We can check that the same lambda values were computed as well, and that there is no temperature dependency:

>>> GE.lambdas()
[[1.0, 0.154], [0.888, 1.0]]
>>> GE.dlambdas_dT()
[[0.0, 0.0], [0.0, 0.0]]

In this case, there is no temperature dependency in the Wilson model as the lambda values are fixed, so the excess enthalpy is always zero. Other properties are not always zero.
>>> GE.HE(), GE.CpE()
(0.0, 0.0)
>>> GE.GE(), GE.SE(), GE.dGE_dT()
(683.165839398, -2.277219464, 2.2772194646)

Example 2

ChemSep is a (partially) free program for modeling distillation. Besides being a wonderful program, it also ships, under a permissive license, several sets of binary interaction parameters. The Wilson parameters in it can be accessed from Thermo as follows. In the following case, we compute activity coefficients of the ethanol-water system at mole fractions of [.252, 0.748].

>>> from thermo.interaction_parameters import IPDB
>>> CAS1, CAS2 = '64-17-5', '7732-18-5'
>>> lambda_as = IPDB.get_ip_asymmetric_matrix(name='ChemSep Wilson', CASs=[CAS1, CAS2], ip='aij')
>>> lambda_bs = IPDB.get_ip_asymmetric_matrix(name='ChemSep Wilson', CASs=[CAS1, CAS2], ip='bij')
>>> GE = Wilson(T=273.15+70, xs=[.252, .748], lambda_as=lambda_as, lambda_bs=lambda_bs)
>>> GE.gammas()
[1.95733110, 1.1600677]

In ChemSep, the form of the Wilson lambda equation is

$\Lambda_{ij} = \frac{V_j}{V_i}\exp\left( \frac{-A_{ij}}{RT}\right)$

The parameters were converted to the form used by Thermo as follows:

$a_{ij} = \log\left(\frac{V_j}{V_i}\right)$

$b_{ij} = \frac{-A_{ij}}{R}= \frac{-A_{ij}}{ 1.9872042586408316}$

This system was chosen because there is also a sample problem for the same components from the DDBST, which can be found here: http://chemthermo.ddbst.com/Problems_Solutions/Mathcad_Files/P05.01a%20VLE%20Behavior%20of%20Ethanol%20-%20Water%20Using%20Wilson.xps

In that example, with different data sets and parameters, they obtain at the same conditions activity coefficients of [1.881, 1.165]. Different sources of parameters for the same system will generally have similar behavior if regressed in the same temperature range. As higher order lambda parameters are added, models become more likely to behave differently. It is recommended in [3] to regress the minimum number of parameters required.

Example 3

The DDBST has published some sample problems which are fun to work with.
Because the DDBST uses a different equation form for the coefficients than this model implements, we must initialize the Wilson object with a different method.

>>> T = 331.42
>>> N = 3
>>> Vs_ddbst = [74.04, 80.67, 40.73]
>>> as_ddbst = [[0, 375.2835, 31.1208], [-1722.58, 0, -1140.79], [747.217, 3596.17, 0.0]]
>>> bs_ddbst = [[0, -3.78434, -0.67704], [6.405502, 0, 2.59359], [-0.256645, -6.2234, 0]]
>>> cs_ddbst = [[0.0, 7.91073e-3, 8.68371e-4], [-7.47788e-3, 0.0, 3.1e-5], [-1.24796e-3, 3e-5, 0.0]]
>>> dis = eis = fis = [[0.0]*N for _ in range(N)]
>>> params = Wilson.from_DDBST_as_matrix(Vs=Vs_ddbst, ais=as_ddbst, bis=bs_ddbst, cis=cs_ddbst, dis=dis, eis=eis, fis=fis, unit_conversion=False)
>>> xs = [0.229, 0.175, 0.596]
>>> GE = Wilson(T=T, xs=xs, lambda_as=params[0], lambda_bs=params[1], lambda_cs=params[2], lambda_ds=params[3], lambda_es=params[4], lambda_fs=params[5])
>>> GE
Wilson(T=331.42, xs=[0.229, 0.175, 0.596], lambda_as=[[0.0, 3.870101271243586, 0.07939943395502425], [-6.491263271243587, 0.0, -3.276991837288562], [0.8542855660449756, 6.906801837288562, 0.0]], lambda_bs=[[0.0, -375.2835, -31.1208], [1722.58, 0.0, 1140.79], [-747.217, -3596.17, -0.0]], lambda_ds=[[-0.0, -0.00791073, -0.000868371], [0.00747788, -0.0, -3.1e-05], [0.00124796, -3e-05, -0.0]])
>>> GE.GE(), GE.dGE_dT(), GE.d2GE_dT2()
(480.2639266306882, 4.355962766232997, -0.029130384525017247)
>>> GE.HE(), GE.SE(), GE.dHE_dT(), GE.dSE_dT()
(-963.3892533542517, -4.355962766232997, 9.654392039281216, 0.029130384525017247)
>>> GE.gammas()
[1.2233934334, 1.100945902470, 1.205289928117]

The solution given by the DDBST has the same values [1.223, 1.101, 1.205], and can be found here: http://chemthermo.ddbst.com/Problems_Solutions/Mathcad_Files/05.09%20Compare%20Experimental%20VLE%20to%20Wilson%20Equation%20Results.xps

Example 4

A simple example is given in [1]; other textbooks' sample problems are normally in the same form as this - with only volumes and the a term specified.
The system is 2-propanol/water at 353.15 K, and the mole fraction of 2-propanol is 0.25.

>>> T = 353.15
>>> N = 2
>>> Vs = [76.92, 18.07] # cm^3/mol
>>> ais = [[0.0, 437.98],[1238.0, 0.0]] # cal/mol
>>> bis = cis = dis = eis = fis = [[0.0]*N for _ in range(N)]
>>> params = Wilson.from_DDBST_as_matrix(Vs=Vs, ais=ais, bis=bis, cis=cis, dis=dis, eis=eis, fis=fis, unit_conversion=True)
>>> xs = [0.25, 0.75]
>>> GE = Wilson(T=T, xs=xs, lambda_as=params[0], lambda_bs=params[1], lambda_cs=params[2], lambda_ds=params[3], lambda_es=params[4], lambda_fs=params[5])
>>> GE.gammas()
[2.124064516, 1.1903745834]

The activity coefficients given in [1] are [2.1244, 1.1904]; matching (with a slight deviation due to their use of 1.987 as the gas constant).

Attributes

T : float
Temperature, [K]

xs : list[float]
Mole fractions, [-]

model_id : int
Unique identifier for the Wilson activity model, [-]

Methods

Calculate and return the excess Gibbs energy of a liquid phase represented with the Wilson model.
Calculate and return the second temperature derivative of excess Gibbs energy of a liquid phase using the Wilson activity coefficient model.
Calculate and return the temperature derivative of mole fraction derivatives of excess Gibbs energy of a liquid represented by the Wilson model.
Calculate and return the second mole fraction derivatives of excess Gibbs energy for the Wilson model.
Calculate and return the second temperature derivative of the lambda terms for the Wilson model at the system temperature.
Calculate and return the third temperature derivative of excess Gibbs energy of a liquid phase using the Wilson activity coefficient model.
Calculate and return the third mole fraction derivatives of excess Gibbs energy using the Wilson model.
Calculate and return the third temperature derivative of the lambda terms for the Wilson model at the system temperature.
Calculate and return the temperature derivative of excess Gibbs energy of a liquid phase represented by the Wilson model.
Calculate and return the mole fraction derivatives of excess Gibbs energy for the Wilson model.
Calculate and return the temperature derivative of the lambda terms for the Wilson model at the system temperature.

from_DDBST(Vi, Vj, a, b, c[, d, e, f, ...])
Converts parameters for the Wilson equation in the DDBST to the basis used in this implementation.

from_DDBST_as_matrix(Vs[, ais, bis, cis, ...])
Converts parameters for the Wilson equation in the DDBST to the basis used in this implementation.

Calculate and return the lambda terms for the Wilson model at the system temperature.

to_T_xs(T, xs)
Method to construct a new Wilson instance at temperature T, and mole fractions xs with the same parameters as the existing object.

GE()[source]

Calculate and return the excess Gibbs energy of a liquid phase represented with the Wilson model.

$g^E = -RT\sum_i x_i \ln\left(\sum_j x_j \lambda_{i,j} \right)$

Returns

GE : float
Excess Gibbs energy of the liquid phase, [J/mol]

d2GE_dT2()[source]

Calculate and return the second temperature derivative of excess Gibbs energy of a liquid phase using the Wilson activity coefficient model.

$\frac{\partial^2 G^E}{\partial T^2} = -R\left[T\sum_i \left(\frac{x_i \sum_j (x_j \frac{\partial^2 \Lambda_{ij}}{\partial T^2} )}{\sum_j x_j \Lambda_{ij}} - \frac{x_i (\sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T} )^2}{(\sum_j x_j \Lambda_{ij})^2} \right) + 2\sum_i \left(\frac{x_i \sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T}}{\sum_j x_j \Lambda_{ij}} \right) \right]$

Returns

d2GE_dT2 : float
Second temperature derivative of excess Gibbs energy, [J/(mol*K^2)]

d2GE_dTdxs()[source]

Calculate and return the temperature derivative of mole fraction derivatives of excess Gibbs energy of a liquid represented by the Wilson model.
$\frac{\partial^2 G^E}{\partial x_k \partial T} = -R\left[T\left( \sum_i \left(\frac{x_i \frac{\partial \Lambda_{ik}}{\partial T}}{\sum_j x_j \Lambda_{ij}} - \frac{x_i \Lambda_{ik} (\sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T} )}{(\sum_j x_j \Lambda_{ij})^2} \right) + \frac{\sum_i x_i \frac{\partial \Lambda_{ki}}{\partial T}}{\sum_j x_j \Lambda_{kj}} \right) + \ln\left(\sum_i x_i \Lambda_{ki}\right) + \sum_i \frac{x_i \Lambda_{ik}}{\sum_j x_j \Lambda_{ij}} \right]$

Returns

d2GE_dTdxs : list[float]
Temperature derivative of mole fraction derivatives of excess Gibbs energy, [J/mol/K]

d2GE_dxixjs()[source]

Calculate and return the second mole fraction derivatives of excess Gibbs energy for the Wilson model.

$\frac{\partial^2 G^E}{\partial x_k \partial x_m} = RT\left( \sum_i \frac{x_i \Lambda_{ik} \Lambda_{im}}{(\sum_j x_j \Lambda_{ij})^2} -\frac{\Lambda_{km}}{\sum_j x_j \Lambda_{kj}} -\frac{\Lambda_{mk}}{\sum_j x_j \Lambda_{mj}} \right)$

Returns

d2GE_dxixjs : list[list[float]]
Second mole fraction derivatives of excess Gibbs energy, [J/mol]

d2lambdas_dT2()[source]

Calculate and return the second temperature derivative of the lambda terms for the Wilson model at the system temperature.

$\frac{\partial^2 \Lambda_{ij}}{\partial^2 T} = \left(2 f_{ij} + \left(2 T f_{ij} + d_{ij} + \frac{c_{ij}}{T} - \frac{b_{ij}}{T^{2}} - \frac{2 e_{ij}}{T^{3}}\right)^{2} - \frac{c_{ij}}{T^{2}} + \frac{2 b_{ij}}{T^{3}} + \frac{6 e_{ij}}{T^{4}}\right) e^{T^{2} f_{ij} + T d_{ij} + a_{ij} + c_{ij} \ln{\left(T \right)} + \frac{b_{ij}}{T} + \frac{e_{ij}}{T^{2}}}$

Returns

d2lambdas_dT2 : list[list[float]]
Second temperature derivatives of Lambda terms, asymmetric matrix, [1/K^2]

Notes

These Lambda ij values (and the coefficients) are NOT symmetric.

d3GE_dT3()[source]

Calculate and return the third temperature derivative of excess Gibbs energy of a liquid phase using the Wilson activity coefficient model.
$\frac{\partial^3 G^E}{\partial T^3} = -R\left[3\sum_i\left(\frac{x_i \sum_j (x_j \frac{\partial^2 \Lambda_{ij}}{\partial T^2} )}{\sum_j x_j \Lambda_{ij}} - \frac{x_i (\sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T} )^2}{(\sum_j x_j \Lambda_{ij})^2} \right) +T\left( \sum_i \frac{x_i (\sum_j x_j \frac{\partial^3 \Lambda _{ij}}{\partial T^3})}{\sum_j x_j \Lambda_{ij}} - \frac{3x_i (\sum_j x_j \frac{\partial^2 \Lambda_{ij}}{\partial T^2}) (\sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T})}{(\sum_j x_j \Lambda_{ij})^2} + 2\frac{x_i(\sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T})^3}{(\sum_j x_j \Lambda_{ij})^3} \right)\right]$

Returns

d3GE_dT3 : float
Third temperature derivative of excess Gibbs energy, [J/(mol*K^3)]

d3GE_dxixjxks()[source]

Calculate and return the third mole fraction derivatives of excess Gibbs energy using the Wilson model.

$\frac{\partial^3 G^E}{\partial x_k \partial x_m \partial x_n} = -RT\left[ \sum_i \left(\frac{2x_i \Lambda_{ik}\Lambda_{im}\Lambda_{in}} {(\sum x_j \Lambda_{ij})^3}\right) - \frac{\Lambda_{km} \Lambda_{kn}}{(\sum_j x_j \Lambda_{kj})^2} - \frac{\Lambda_{mk} \Lambda_{mn}}{(\sum_j x_j \Lambda_{mj})^2} - \frac{\Lambda_{nk} \Lambda_{nm}}{(\sum_j x_j \Lambda_{nj})^2} \right]$

Returns

d3GE_dxixjxks : list[list[list[float]]]
Third mole fraction derivatives of excess Gibbs energy, [J/mol]

d3lambdas_dT3()[source]

Calculate and return the third temperature derivative of the lambda terms for the Wilson model at the system temperature.
$\frac{\partial^3 \Lambda_{ij}}{\partial^3 T} = \left(3 \left(2 f_{ij} - \frac{c_{ij}}{T^{2}} + \frac{2 b_{ij}}{T^{3}} + \frac{6 e_{ij}}{T^{4}}\right) \left(2 T f_{ij} + d_{ij} + \frac{c_{ij}}{T} - \frac{b_{ij}}{T^{2}} - \frac{2 e_{ij}}{T^{3}}\right) + \left(2 T f_{ij} + d_{ij} + \frac{c_{ij}}{T} - \frac{b_{ij}}{T^{2}} - \frac{2 e_{ij}}{T^{3}}\right)^{3} - \frac{2 \left(- c_{ij} + \frac{3 b_{ij}}{T} + \frac{12 e_{ij}}{T^{2}}\right)}{T^{3}}\right) e^{T^{2} f_{ij} + T d_{ij} + a_{ij} + c_{ij} \ln{\left(T \right)} + \frac{b_{ij}}{T} + \frac{e_{ij}}{T^{2}}}$

Returns

d3lambdas_dT3 : list[list[float]]
Third temperature derivatives of Lambda terms, asymmetric matrix, [1/K^3]

Notes

These Lambda ij values (and the coefficients) are NOT symmetric.

dGE_dT()[source]

Calculate and return the temperature derivative of excess Gibbs energy of a liquid phase represented by the Wilson model.

$\frac{\partial G^E}{\partial T} = -R\sum_i x_i \ln\left(\sum_j x_j \Lambda_{ij}\right) -RT\sum_i \frac{x_i \sum_j x_j \frac{\partial \Lambda_{ij}}{\partial T}}{\sum_j x_j \Lambda_{ij}}$

Returns

dGE_dT : float
First temperature derivative of excess Gibbs energy of a liquid phase represented by the Wilson model, [J/(mol*K)]

dGE_dxs()[source]

Calculate and return the mole fraction derivatives of excess Gibbs energy for the Wilson model.

$\frac{\partial G^E}{\partial x_k} = -RT\left[ \sum_i \frac{x_i \Lambda_{ik}}{\sum_j \Lambda_{ij}x_j } + \ln\left(\sum_j x_j \Lambda_{kj}\right) \right]$

Returns

dGE_dxs : list[float]
Mole fraction derivatives of excess Gibbs energy, [J/mol]

dlambdas_dT()[source]

Calculate and return the temperature derivative of the lambda terms for the Wilson model at the system temperature.
$\frac{\partial \Lambda_{ij}}{\partial T} = \left(2 T f_{ij} + d_{ij} + \frac{c_{ij}}{T} - \frac{b_{ij}}{T^{2}} - \frac{2 e_{ij}}{T^{3}}\right) e^{T^{2} f_{ij} + T d_{ij} + a_{ij} + c_{ij} \ln{\left(T \right)} + \frac{b_{ij}}{T} + \frac{e_{ij}}{T^{2}}}$

Returns

dlambdas_dT : list[list[float]]
Temperature derivatives of Lambda terms, asymmetric matrix, [1/K]

Notes

These Lambda ij values (and the coefficients) are NOT symmetric.

static from_DDBST(Vi, Vj, a, b, c, d=0.0, e=0.0, f=0.0, unit_conversion=True)[source]

Converts parameters for the Wilson equation in the DDBST to the basis used in this implementation.

$\Lambda_{ij} = \frac{V_j}{V_i}\exp\left(\frac{-\Delta \lambda_{ij}}{RT} \right)$

$\Delta \lambda_{ij} = a_{ij} + b_{ij}T + c_{ij} T^2 + d_{ij}T\ln T + e_{ij}T^3 + f_{ij}/T$

Parameters

Vi : float
Molar volume of component i; needs only to be in the same units as Vj, [cm^3/mol]

Vj : float
Molar volume of component j; needs only to be in the same units as Vi, [cm^3/mol]

a : float
a parameter in DDBST form, [K]

b : float
b parameter in DDBST form, [-]

c : float
c parameter in DDBST form, [1/K]

d : float, optional
d parameter in DDBST form, [-]

e : float, optional
e parameter in DDBST form, [1/K^2]

f : float, optional
f parameter in DDBST form, [K^2]

unit_conversion : bool
If True, the input coefficients are in units of cal/K/mol, and a gas constant R of 1.9872042… is used for the conversion; the DDBST generally uses this, [-]

Returns

a : float
a parameter in Wilson form, [-]

b : float
b parameter in Wilson form, [K]

c : float
c parameter in Wilson form, [-]

d : float
d parameter in Wilson form, [1/K]

e : float
e parameter in Wilson form, [K^2]

f : float
f parameter in Wilson form, [1/K^2]

Notes

The units show how the different variables are related to each other.
Examples

>>> Wilson.from_DDBST(Vi=74.04, Vj=80.67, a=375.2835, b=-3.78434, c=0.00791073, d=0.0, e=0.0, f=0.0, unit_conversion=False)
(3.8701012712, -375.2835, -0.0, -0.00791073, -0.0, -0.0)

static from_DDBST_as_matrix(Vs, ais=None, bis=None, cis=None, dis=None, eis=None, fis=None, unit_conversion=True)[source]

Converts parameters for the Wilson equation in the DDBST to the basis used in this implementation. Matrix wrapper around Wilson.from_DDBST.

Parameters

Vs : list[float]
Molar volume of each component; needs only to be in consistent units, [cm^3/mol]

ais : list[list[float]]
a parameters in DDBST form, [K]

bis : list[list[float]]
b parameters in DDBST form, [-]

cis : list[list[float]]
c parameters in DDBST form, [1/K]

dis : list[list[float]], optional
d parameters in DDBST form, [-]

eis : list[list[float]], optional
e parameters in DDBST form, [1/K^2]

fis : list[list[float]], optional
f parameters in DDBST form, [K^2]

unit_conversion : bool
If True, the input coefficients are in units of cal/K/mol, and a gas constant R of 1.9872042… is used for the conversion; the DDBST generally uses this, [-]

Returns

a : list[list[float]]
a parameters in Wilson form, [-]

b : list[list[float]]
b parameters in Wilson form, [K]

c : list[list[float]]
c parameters in Wilson form, [-]

d : list[list[float]]
d parameters in Wilson form, [1/K]

e : list[list[float]]
e parameters in Wilson form, [K^2]

f : list[list[float]]
f parameters in Wilson form, [1/K^2]

lambdas()[source]

Calculate and return the lambda terms for the Wilson model at the system temperature.

$\Lambda_{ij} = \exp\left[a_{ij}+\frac{b_{ij}}{T}+c_{ij}\ln T + d_{ij}T + \frac{e_{ij}}{T^2} + f_{ij}T^2\right]$

Returns

lambdas : list[list[float]]
Lambda terms, asymmetric matrix, [-]

Notes

These Lambda ij values (and the coefficients) are NOT symmetric.

to_T_xs(T, xs)[source]

Method to construct a new Wilson instance at temperature T, and mole fractions xs with the same parameters as the existing object.
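The lambda correlation and its temperature derivative documented above can be sketched in plain Python, independent of the thermo library. The coefficient values in the finite-difference check below are hypothetical, chosen only to exercise the formulas:

```python
from math import exp, log

def lambda_ij(T, a, b=0.0, c=0.0, d=0.0, e=0.0, f=0.0):
    # Lambda_ij = exp(a + b/T + c*ln(T) + d*T + e/T^2 + f*T^2), per the docs above
    return exp(a + b/T + c*log(T) + d*T + e/T**2 + f*T**2)

def dlambda_ij_dT(T, a, b=0.0, c=0.0, d=0.0, e=0.0, f=0.0):
    # Analytic derivative: (2*T*f + d + c/T - b/T^2 - 2*e/T^3) * Lambda_ij
    return (2.0*T*f + d + c/T - b/T**2 - 2.0*e/T**3)*lambda_ij(T, a, b, c, d, e, f)

# Temperature-independent coefficients reproduce a fixed Lambda, as in Example 1
assert abs(lambda_ij(300.0, log(0.154)) - 0.154) < 1e-12
assert abs(dlambda_ij_dT(300.0, log(0.154))) < 1e-15

# Finite-difference check of the analytic derivative for a T-dependent pair
T, h = 343.15, 1e-4
args = dict(a=1.0, b=-375.0, c=0.01, d=1e-4)  # hypothetical coefficients
fd = (lambda_ij(T + h, **args) - lambda_ij(T - h, **args))/(2.0*h)
assert abs(fd - dlambda_ij_dT(T, **args)) < 1e-6
```

This also makes the asymmetry explicit: each (i, j) pair has its own independent coefficient set.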
Parameters

T : float
Temperature, [K]

xs : list[float]
Mole fractions of each component, [-]

Returns

obj : Wilson
New Wilson object at the specified conditions, [-]

Notes

If the new temperature is the same as the existing temperature, any lambda terms or derivatives which have already been calculated will be set on the new object as well.

## Wilson Functional Calculations

thermo.wilson.Wilson_gammas(xs, params)[source]

Calculates the activity coefficients of each species in a mixture using the Wilson method, given their mole fractions and dimensionless interaction parameters. Those are normally correlated with temperature, and need to be calculated separately.

$\ln \gamma_i = 1 - \ln \left(\sum_j^N \Lambda_{ij} x_j\right) -\sum_j^N \frac{\Lambda_{ji}x_j}{\displaystyle\sum_k^N \Lambda_{jk}x_k}$

Parameters

xs : list[float]
Liquid mole fractions of each species, [-]

params : list[list[float]]
Dimensionless interaction parameters of each compound with each other, [-]

Returns

gammas : list[float]
Activity coefficient for each species in the liquid mixture, [-]

Notes

This model needs N^2 parameters.

The original model correlated the interaction parameters using the standard pure-component molar volumes of each species at 25°C, in the following form:

$\Lambda_{ij} = \frac{V_j}{V_i} \exp\left(\frac{-\lambda_{i,j}}{RT}\right)$

If a compound is not liquid at that temperature, the liquid volume is taken at the saturated pressure; and if the component is supercritical, its liquid molar volume should be extrapolated to 25°C. However, that form has less flexibility and offers no advantage over using only regressed parameters.

Most correlations for the interaction parameters include some of the terms shown in the following form:

$\ln \Lambda_{ij} =a_{ij}+\frac{b_{ij}}{T}+c_{ij}\ln T + d_{ij}T + \frac{e_{ij}}{T^2} + h_{ij}T^2$

The Wilson model is not applicable to liquid-liquid systems.
For this model to produce ideal activity coefficients (gammas = 1), all interaction parameters should be 1.

References

1 Wilson, Grant M. "Vapor-Liquid Equilibrium. XI. A New Expression for the Excess Free Energy of Mixing." Journal of the American Chemical Society 86, no. 2 (January 1, 1964): 127-130. doi:10.1021/ja01056a002.

2 Gmehling, Jürgen, Bärbel Kolbe, Michael Kleiber, and Jürgen Rarey. Chemical Thermodynamics for Process Simulation. 1st edition. Weinheim: Wiley-VCH, 2012.

Examples

Ethanol-water example, at 343.15 K and 1 MPa, from [2], also posted online at http://chemthermo.ddbst.com/Problems_Solutions/Mathcad_Files/P05.01a%20VLE%20Behavior%20of%20Ethanol%20-%20Water%20Using%20Wilson.xps :

>>> Wilson_gammas([0.252, 0.748], [[1, 0.154], [0.888, 1]])
[1.881492608717, 1.165577493112]

## Wilson Regression Calculations

thermo.wilson.wilson_gammas_binaries(xs, lambda12, lambda21, calc=None)[source]

Calculates activity coefficients at fixed lambda values for a binary system at a series of mole fractions. This is used for regression of lambda parameters. This function is highly optimized, and operates on multiple points at a time.
$\ln \gamma_1 = -\ln(x_1 + \Lambda_{12}x_2) + x_2\left( \frac{\Lambda_{12}}{x_1 + \Lambda_{12}x_2} - \frac{\Lambda_{21}}{x_2 + \Lambda_{21}x_1} \right)$

$\ln \gamma_2 = -\ln(x_2 + \Lambda_{21}x_1) - x_1\left( \frac{\Lambda_{12}}{x_1 + \Lambda_{12}x_2} - \frac{\Lambda_{21}}{x_2 + \Lambda_{21}x_1} \right)$

Parameters

xs : list[float]
Liquid mole fractions of each species, in the format x0_0, x1_0 (component 1 point 1, component 2 point 1), x0_1, x1_1 (component 1 point 2, component 2 point 2), … [-]

lambda12 : float
lambda parameter for 12, [-]

lambda21 : float
lambda parameter for 21, [-]

calc : list[float], optional
Array in which to store the activity coefficient for each species in the liquid mixture, indexed the same as xs; can be omitted, or provided for slightly better performance, [-]

Returns

gammas : list[float]
Activity coefficient for each species in the liquid mixture, indexed the same as xs, [-]

Notes

Lambda values below zero are mathematically impossible; they are hard-coded to be replaced with a very small number. This is helpful for regression, which might otherwise try to make those values negative.

Examples

>>> wilson_gammas_binaries([.1, .9, 0.3, 0.7, .85, .15], 0.1759, 0.7991)
[3.42989, 1.03432, 1.74338, 1.21234, 1.01766, 2.30656]
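The two binary ln-gamma equations above are easy to implement directly. A minimal plain-Python sketch (independent of the optimized thermo implementation) reproduces the ethanol-water gammas quoted earlier on this page:

```python
from math import exp, log

def wilson_gammas_binary(x1, lambda12, lambda21):
    """Binary Wilson activity coefficients from the two equations above;
    returns (gamma1, gamma2)."""
    x2 = 1.0 - x1
    s1 = x1 + lambda12*x2
    s2 = x2 + lambda21*x1
    term = lambda12/s1 - lambda21/s2
    g1 = exp(-log(s1) + x2*term)
    g2 = exp(-log(s2) - x1*term)
    return g1, g2

# Ethanol-water values used throughout this page
g1, g2 = wilson_gammas_binary(0.252, 0.154, 0.888)
assert abs(g1 - 1.881492608717) < 1e-6
assert abs(g2 - 1.165577493112) < 1e-6
```

For N = 2, these closed-form expressions are algebraically identical to the general N-component Wilson_gammas formula.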
https://www.theochem.ru.nl/~pwormer/Knowino/knowino.org/w/index404a.html?title=Euclidean_algorithm&oldid=10904
# Euclidean algorithm

In mathematics, the Euclidean algorithm, or Euclid's algorithm, named after the ancient Greek geometer and number-theorist Euclid, is an algorithm for finding the greatest common divisor (gcd) of two integers. The algorithm does not require prime factorizations and runs efficiently even when methods using prime factorizations do not.

## The algorithm

### Simple but slow version

The algorithm is based on this simple fact: If d is a divisor of both m and n, then d is a divisor of m − n. Thus, for example, any divisor shared in common by both 1989 and 867 must also be a divisor of 1989 − 867 = 1122. This reduces the problem of finding gcd(1989, 867) to the problem of finding gcd(1122, 867). This reduction to smaller integers is iterated as many times as possible. Since one cannot go on getting smaller and smaller positive integers forever, one must reach a point where one of the two is 0. But one can get 0 when subtracting two integers only if the two integers are equal. Therefore, one must reach a point where the two are equal. At that point, the problem of the gcd becomes trivial. Thus:

gcd(1989, 867) = gcd(1989 − 867, 867) = gcd(1122, 867)
= gcd(1122 − 867, 867) = gcd(255, 867)
= gcd(255, 867 − 255) = gcd(255, 612)
= gcd(255, 612 − 255) = gcd(255, 357)
= gcd(255, 357 − 255) = gcd(255, 102)
= gcd(255 − 102, 102) = gcd(153, 102)
= gcd(153 − 102, 102) = gcd(51, 102)
= gcd(51, 102 − 51) = gcd(51, 51)
= 51.

Thus the largest integer that is a divisor of both 1989 and 867 is 51. One use of this fact is in reducing the fraction 1989/867 to lowest terms:

$\frac{1989}{867} = \frac{51 \times 39}{51 \times 17} =\frac{39}{17}.$

### Efficient version

In the example above, successive subtraction of 867 from the larger of the two numbers whose gcd was sought led to the remainder on division of the larger number, 1989, by the smaller, 867.
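The "simple but slow" subtraction scheme above can be written as a few lines of Python (an illustrative sketch, not part of any library):

```python
def gcd_subtract(m, n):
    """Slow gcd by repeated subtraction, exactly as in the worked example above:
    keep subtracting the smaller number from the larger until they are equal."""
    while m != n:
        if m > n:
            m -= n
        else:
            n -= m
    return m

assert gcd_subtract(1989, 867) == 51
```

Each iteration performs one of the reduction steps shown in the chain of gcd equalities.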
Thus the algorithm may be stated:

• Replace the larger of the two numbers by the remainder on division of the larger one by the smaller one.
• Repeat until one of the two numbers is 0. The gcd is the other number.

The "simple but slow" version was presented only to show the simplicity of the underlying idea.

## Example

It is desired to find the gcd of 357765 and 110959. We have

gcd(357765, 110959) = gcd(24888, 110959) because 24888 is the remainder when 357765 is divided by 110959. Then
gcd(24888, 110959) = gcd(24888, 11407) because 11407 is the remainder when 110959 is divided by 24888. Then
gcd(24888, 11407) = gcd(2074, 11407) because 2074 is the remainder when 24888 is divided by 11407. Then
gcd(2074, 11407) = gcd(2074, 1037) because 1037 is the remainder when 11407 is divided by 2074. Then
gcd(2074, 1037) = gcd(0, 1037) because 0 is the remainder when 2074 is divided by 1037.

No further reduction is possible, and the gcd is 1037.

### Two applications

#### Reducing a fraction to lowest terms

It is desired to reduce $\frac{357765}{110959}$ to lowest terms. We have

$\frac{357765}{110959} = \frac{1037 \times 345}{1037 \times 107} = \frac{345}{107}.$

#### Finding a common denominator

It is desired to find the exact value of $\frac{6}{357765} + \frac{5}{110959}$ (not a decimal approximation, such as any conventional calculator would give). We have

$\frac{6}{1037\times 345} + \frac{5}{1037 \times 107} = \frac{6 \times 107}{1037\times 345 \times 107} + \frac{5 \times 345}{1037 \times 107 \times 345} = \frac{642}{38280855} + \frac{1725}{38280855}$

etc. The common denominator 38280855 is the least common multiple of the two denominators 357765 and 110959, and is much smaller than what would have resulted from multiplying the two denominators.

## Solution of linear Diophantine equations

In an example above we found the gcd of 357765 and 110959 to be 1037.
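The efficient remainder-based statement of the algorithm translates directly into code. A short Python sketch, including the least-common-multiple computation used in the common-denominator application above:

```python
def gcd(m, n):
    """Efficient gcd: replace the larger number by the remainder on
    division by the smaller one, until one of the two is 0."""
    while n != 0:
        m, n = n, m % n
    return m

# The worked example: gcd(357765, 110959) = 1037
m, n = 357765, 110959
g = gcd(m, n)
assert g == 1037
assert gcd(1989, 867) == 51

# The least common multiple used as the common denominator
assert m//g*n == 38280855
```

The sequence of remainders produced by `m % n` (24888, 11407, 2074, 1037, 0) is exactly the sequence shown in the example.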
In number theory it is of some interest that this entails that the Diophantine equation $357765x + 110959y = 1037 \,$ can be solved in integers x and y. Generally if gcd(ab) = c, then the equation $ax + by = c\,$ has a solution a pair (xy) of integers. Moreover, if we make use of the quotients, rather than only the remainders, in the divisions we did while executing Euclid's algorithm, we can find x and y. Here is how: $\begin{matrix} 357765 & - & 110959 & \cdot & 3 & = & 24888 & {}\qquad(\mbox{quotient} = 3, \mbox{ remainder} = 24888) & {}\qquad (1) \\ 110959 & - & 24888 & \cdot & 4 & = & 11407 & {}\qquad(\mbox{quotient} = 4, \mbox{ remainder} = 11407) & {} \qquad (2) \\ 24888 & - & 11407 & \cdot & 2 & = & 2074 & {}\qquad(\mbox{quotient} = 2, \mbox{ remainder} = 2074) & {} \qquad (3) \\ 11407 & - & 2074 & \cdot & 5 & = & 1037 & {}\qquad(\mbox{quotient} = 5, \mbox{ remainder} = 1037) & {} \qquad (4) \\ 2074 & - & 1037 & \cdot & 2 & = & 0 & {}\qquad(\mbox{quotient} = 2, \mbox{ remainder} = 0) & {} \qquad (5) \end{matrix}$ As above, when we get 0 as a remainder, we know that the last remainder preceding it, 1037, is the gcd. Now we want to write the gcd, 1037, as a linear combination of the two numbers we started with, in which the coefficients x and y are integers. We start with the information on line (4) above: $1037 = [11407] - [2074] \cdot 5. \,$ This gives 1037 as a linear combination of the two numbers in square brackets, which were among the successive remainders. First, we replace the smaller of those two remainders with what line (3) tells us it is equal to: $1037 = [11407] - ([24888] - [11407]\cdot 2) \cdot 5 \,$ $= [11407]\cdot 11 - [24888]\cdot 5. \,$ This gives us 1037 as a linear combination of the two remainders that appear in line (3) above. Then, we replace the smaller of those two remainders with what line (2) tells us it is equal to: $1037 = ([110959] - [24888]\cdot 4)\cdot 11 - [24888] \cdot 5 \,$ $= [110959]\cdot 11 - [24888]\cdot 49. 
\,$ This gives us 1037 as a linear combination of the two remainders that appear in line (2) above. Then, we replace the smaller of those two remainders with what line (1) tells us it is equal to: $1037 = [110959]\cdot 11 - ([357765] - [110959]\cdot 3)\cdot 49 \,$ $= [110959]\cdot 158 - [357765] \cdot 49. \,$ This gives us 1037 as a linear combination of the two numbers we started with, 357765 and 110959. Thus x = −49 and y = 158 is a solution.
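The back-substitution carried out by hand above can be automated with the extended Euclidean algorithm, which tracks the coefficients through each division step. A Python sketch reproducing the solution x = −49, y = 158:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), by carrying the
    coefficient bookkeeping through the same divisions done by hand above."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)
        a, b = b, r
        x0, x1 = x1, x0 - q*x1
        y0, y1 = y1, y0 - q*y1
    return a, x0, y0

g, x, y = extended_gcd(357765, 110959)
assert (g, x, y) == (1037, -49, 158)
assert 357765*x + 110959*y == g
```

The quotients consumed by `divmod` (3, 4, 2, 5, 2) are the same ones recorded on lines (1)-(5) of the worked computation.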
https://samacheerguru.com/samacheer-kalvi-6th-maths-term-1-chapter-3-ex-3-2/
# Samacheer Kalvi 6th Maths Solutions Term 1 Chapter 3 Ratio and Proportion Ex 3.2 ## Tamilnadu Samacheer Kalvi 6th Maths Solutions Term 1 Chapter 3 Ratio and Proportion Ex 3.2 Question 1. Fill in the blanks of the given equivalent ratios. (i) 3 : 5 = 9 : ___ (ii) 4 : 5 = ___ : 10 (iii) 6 : ____ = 1 : 2 Solution: (i) 15 Hint: $$\frac{3}{5}=\frac{3 \times 3}{5 \times 3}=\frac{9}{15}$$ (ii) 8 Hint: $$\frac{4}{5}=\frac{4 \times 2}{5 \times 2}=\frac{8}{10}$$ (iii) 12 Hint: $$\frac{1}{2}=\frac{1 \times 6}{2 \times 6}=\frac{6}{12}$$ Question 2. Complete the Table: Solution: Question 3. Say True or False. (i) 5 : 7 is equivalent to 21 : 15. (ii) If 40 is divided in the ratio 3 : 2, then the larger part is 24. Solution: (i) False (ii) True Question 4. Give two equivalent ratios for each of the following. (i) 3 : 2 (ii) 1 : 6 (iii) 5 : 4 Solution: Question 5. Which of the two ratios is larger? (i) 4 : 5 or 8 : 15 (ii) 3 : 4 or 7 : 8 (iii) 1 : 2 or 2 : 1 Solution: Question 6. Divide the numbers given below in the required ratio. (i) 20 in the ratio 3 : 2 (ii) 27 in the ratio 4 : 5 (iii) 40 in the ratio 6 : 14 Solution: (i) Ratio = 3 : 2 Sum of the ratio = 3 + 2 = 5 5 parts = 20 1 part = $$\frac{20}{5}$$ = 4 3 parts = 3 × 4 = 12 2 parts = 2 × 4 = 8 20 can be divided in the form as 12, 8. (ii) Ratio = 4 : 5 Sum of the ratio = 4 + 5 = 9 9 parts = 27 1 part = $$\frac{27}{9}$$ = 3 4 parts = 4 × 3 = 12 5 parts = 5 × 3 =15 27 can be divided in the form as 12, 15. (iii) 40 in the ratio 6 : 14 Ratio = 6 : 14 Sum of the ratio = 6 + 14 = 20 20 parts = 40 1 part = $$\frac{40}{20}$$ = 2 6 parts = 2 × 6 = 12 14 parts = 2 × 14 = 28 40 can be divided in the form as 12, 28. Question 7. In a family, the amount spent in a month for buying Provisions and Vegetables are in the ratio 3 : 2. If the allotted amount is ₹ 4000, then what will be the amount spent for (i) Provisions and (ii) Vegetables? 
Solution:
Dividing the total amount ₹ 4000 into 3 + 2 = 5 equal parts:
(i) For Provisions: 3 out of 5 parts are spent on provisions and 2 out of 5 parts on vegetables.
$$4000 \times \frac{3}{5}=2400$$ for provisions
(ii) For Vegetables:
$$4000 \times \frac{2}{5}=1600$$ for vegetables.
₹ 2400 is spent on provisions and ₹ 1600 is spent on vegetables.

Question 8.
A line segment 63 cm long is to be divided into two parts in the ratio 3 : 4. Find the length of each part.
Solution:
Total length = 63 cm
Ratio = 3 : 4
Sum of the ratio = 3 + 4 = 7
7 parts = 63 cm
1 part = $$\frac{63}{7}$$ = 9 cm
3 parts = 3 × 9 cm = 27 cm
4 parts = 4 × 9 cm = 36 cm
∴ 63 cm can be divided into the parts as 27 cm and 36 cm.

Objective Type Questions

Question 9.
If 2 : 3 and 4 : ___ are equivalent ratios, then the missing term is ____
(a) 6
(b) 2
(c) 4
(d) 3
Solution:
(a) 6
Hint: $$\frac{2}{3}=\frac{2 \times 2}{3 \times 2}=\frac{4}{6}$$

Question 10.
An equivalent ratio of 4 : 7 is
(a) 1 : 3
(b) 8 : 15
(c) 14 : 8
(d) 12 : 21
Solution:
(d) 12 : 21

Question 11.
Which is not an equivalent ratio of $$\frac{16}{24}$$ ?
(a) $$\frac{6}{9}$$
(b) $$\frac{12}{18}$$
(c) $$\frac{10}{15}$$
(d) $$\frac{20}{28}$$
Solution:
(d) $$\frac{20}{28}$$
Hint: $$\frac{16}{24}=\frac{8 \times 2}{8 \times 3}=\frac{2}{3}$$

Question 12.
If Rs 1600 is divided
(a) Rs 480
(b) Rs 800
(c) Rs 1000
(d) Rs 200
Solution:
(c) Rs 1000
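The "sum of the ratio" method used in these solutions can be sketched in a few lines of Python (an illustrative check, not part of the textbook):

```python
def divide_in_ratio(total, a, b):
    """Split `total` into two parts in the ratio a : b, by first
    finding the value of one part out of (a + b) equal parts."""
    one_part = total / (a + b)
    return a*one_part, b*one_part

# The splits worked out in Questions 6, 7, and 8 above
assert divide_in_ratio(20, 3, 2) == (12, 8)
assert divide_in_ratio(27, 4, 5) == (12, 15)
assert divide_in_ratio(4000, 3, 2) == (2400, 1600)
assert divide_in_ratio(63, 3, 4) == (27, 36)
```

Each assertion mirrors one of the worked solutions: sum the ratio, find one part, then scale.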
https://deepai.org/publication/discrete-geodesic-calculus-in-the-space-of-viscous-fluidic-objects
# Discrete geodesic calculus in the space of viscous fluidic objects

Based on a local approximation of the Riemannian distance on a manifold by a computationally cheap dissimilarity measure, a time discrete geodesic calculus is developed, and applications to shape space are explored. The dissimilarity measure is derived from a deformation energy whose Hessian reproduces the underlying Riemannian metric, and it is used to define length and energy of discrete paths in shape space. The notion of discrete geodesics defined as energy minimizing paths gives rise to a discrete logarithmic map, a variational definition of a discrete exponential map, and a time discrete parallel transport. This new concept is applied to a shape space in which shapes are considered as boundary contours of physical objects consisting of viscous material. The flexibility and computational efficiency of the approach is demonstrated for topology preserving shape morphing, the representation of paths in shape space via local shape variations as path generators, shape extrapolation via discrete geodesic flow, and the transfer of geometric features.
## 1 Introduction

Geodesic paths in shape space make it possible to define smooth and in some sense geometrically or physically natural connecting paths between two given shapes, or they enable the extrapolation of a path from an initial shape and an initial shape variation which encodes the path direction. Applications include shape modeling in computer vision [17, 16], computational anatomy, where the morphing path establishes correspondences between a patient and a template [2, 26], shape clustering based on Riemannian distances [32], as well as shape statistics [9, 13], where geodesic paths in shape space transport information from the observed shapes into a common reference frame in which statistics can be performed.

As locally length minimizing paths, geodesic paths require endowing the space of shapes with a Riemannian metric which encodes the preferred shape variations. There is a rich diversity of Riemannian shape spaces in the literature. Kilian et al. compute isometry invariant geodesics between consistently triangulated surfaces [16], where the Riemannian metric measures stretching of triangle edges, while the metric by Liu et al. also takes into account directional changes of edges [22]. For planar curves, different Riemannian metrics have been devised, including metrics on direction and curvature functions [17], metrics on stretching and bending variations [31], as well as curvature-weighted and Sobolev-type metrics [25, 34], some of which allow closed-form geodesics [37, 33].
A variational approach to the computation of geodesics in the space of planar Jordan curves has been proposed by Schmidt et al. in [30]. The extrapolation of geodesics in the space of curves incorporating translational, rotational, and scale invariance has been investigated by Mennucci et al. [24]. A Riemannian space of non-planar elastic curves has very recently been proposed by Srivastava et al. [32]. In the above approaches, the shape space is often identified with a so-called pre-shape space of curve parameterizations over a 1D domain (or special representations thereof) modulo the action of the reparameterization group. It is essential that the metric on the pre-shape space is invariant under reparameterization, or equivalently that reparameterization represents an isometry in the pre-shape space, so that the Riemannian metric can be inherited by the shape space. Such reparameterization-invariant metrics can also be defined on the space of parameterized 2D surfaces [1, 20]. For certain representations of the parameterization one is led to a very simple form of the metric [19]. The issue of reparameterization invariance does not occur when the mathematical description of the shape space is not based on parameterizations, which often simplifies the analysis (and is also the approach taken here).

When an object is warped over time, it sweeps out a shape tube in space-time. Zolésio investigates geodesics in terms of shortest shape tubes [39]. The space of sufficiently smooth domains can be assigned a Riemannian metric by identifying the tangent space at a domain with velocity fields and defining a metric on these. Dupuis et al. employ a metric $G(v,v)=\int_D Lv\cdot v\,dx$ for a higher order elliptic operator $L$ on some computational domain $D$ [7], ensuring a diffeomorphism property of geodesic paths. A corresponding geodesic shooting method has been implemented in [3]. Fuchs et al. propose a viscous-fluid based Riemannian metric [12].
Fletcher and Whitaker employ a similar metric on pullbacks of velocity fields onto a reference shape [10]. Miller and Younes consider the space of registered images as the product space of the Lie group of diffeomorphisms and image maps. They define a Riemannian metric using sufficiently regular elliptic operators on the diffeomorphism-generating velocity fields, which may also depend on the current image [27].

A morphing approach based on the concept of optimal mass transport has been proposed by Haker et al. [14, 38]. An image or a shape is viewed as a mass density, and for two such densities the Monge–Kantorovich functional $\int_D|\psi(x)-x|^2\rho_0(x)\,dx$ is minimized over all mass preserving mappings $\psi$, i.e. mappings which push the first density forward onto the second. A morphing path is then obtained by following the resulting mass transport over time. As for our approach, there is a continuum-mechanical interpretation, namely minimizing the action of an incompressible fluid flow [4]; however, the flow typically preserves neither local shape features and isometries nor the shape topology.

Very often, geodesics in shape space are approached via the underlying geodesic evolution equation, and geodesics between two shapes are computed by solving this ODE within a shooting method framework [17, 3, 1]. An alternative approach exploits the energy-minimizing property of geodesics: Schmidt et al. perform a Gauß–Seidel type fixed-point iteration which can be interpreted as a gradient descent on the path energy, and Srivastava et al. derive the equations of a gradient flow for the path energy which they then discretize [32]. In contrast, we employ an inherently variational formulation where geodesics are defined as minimizers of a time discrete path energy. Discrete geodesics are then defined consistently as minimizers of a corresponding discrete energy. In this paper we start from this time discretization and consistently develop a time discrete geodesic calculus in shape space.
The resulting variational discretization of the basic Riemannian calculus consists of an exponential map, a logarithmic map, parallel transport, and finally an underlying discrete connection. To this end, we replace the exact, computationally expensive Riemannian distance by a relatively cheap but consistent dissimilarity measure. Our choice of the dissimilarity measure not only ensures consistency for vanishing time step size but also a good representation of shape space geometry already for coarse time steps. For example, rigid body motion invariance is naturally incorporated in this approach. We illustrate this approach on a shape space consisting of homeomorphic viscous-fluid objects and a corresponding deformation-based dissimilarity measure.

Different from most approaches, which first discretize in space via the choice of a parameterization, a set of control points, or a mesh, and then solve the resulting transport equations by suitable solvers for ordinary differential equations (see the discussion above), our time discretization is defined on the usually infinite dimensional shape space. It results from a consistent transfer of time continuous to time discrete variational principles. Thereby, it leads to a collection of variational problems on the shape space, which in our concrete implementation of the proposed calculus consists of non-parameterized, volumetric objects.

Let us also mention a further remarkable conceptual difference: the way the time discrete geodesic calculus is introduced differs substantially from the way the time continuous counterpart is usually developed. In classical Riemannian differential geometry one first defines a connection $\nabla_\xi\eta$ for two vector fields $\xi$ and $\eta$ on a manifold $\mathcal{M}$. With the connection at hand, a tangent vector $\zeta$ can be transported parallel along a path with motion field $v$ by solving $\nabla_v\zeta=0$. Studying those paths whose motion field itself is transported parallel along the path (i.e. it solves the ODE $\nabla_v v=0$), one is led to geodesics.
Next, the exponential map is introduced via the solution of the above ODE for varying initial velocity. Finally, the logarithm is obtained as the (local) inverse of the exponential map. In the time discrete calculus we instead start with a time discrete formulation of path length and energy and then define discrete geodesics as minimizers of the discrete energy. Evaluating the initial step of a discrete geodesic path as the discrete counterpart of the initial velocity, we are led to the discrete logarithm. Then, the discrete exponential map is defined as the inverse of the discrete logarithm. Next, the discrete logarithm and the discrete exponential make it possible to define a discrete parallel transport based on the construction of a sequence of approximate Riemannian parallelograms (commonly known as Schild's ladder [8]). Finally, with the discrete parallel transport at hand, a time discrete connection can be defined.

Let us note that the approximation of parallel transport in shape space via Schild's ladder has also been used in the context of the earlier mentioned flow of diffeomorphisms approach [29, 23]. In our discrete framework, however, the notion of discrete parallel transport is directly derived from the parallelogram construction, consistently with the overall discrete approach to geodesics. A related approach for time discrete geodesics has been presented in an earlier paper by Wirth et al. [36]. In contrast to [36], we do not restrict ourselves here to the computation of geodesic paths between two shapes but devise a full-fledged theory of discrete geodesic calculus (cf. Figure 1). Furthermore, different from that approach, we ensure topological consistency and describe shapes solely via deformations of reference objects instead of treating deformations and level set representations of shapes simultaneously as degrees of freedom, which in turn strongly simplifies the minimization procedure.

The paper is organized as follows.
In Section 2 we introduce a special model for a shape space, the space of viscous fluidic objects, to which we restrict our exposition of the geodesic calculus. Here, in the light of the discrete shape calculus to be developed, we review the notions of discrete path length and discrete path energy. After these preliminaries, the actual time discrete calculus, consisting of a discrete logarithm, a discrete exponential, and a discrete parallel transport together with a discrete connection, is introduced and discussed in Section 3. Then, Section 4 is devoted to the numerical discretization via characteristic functions and a parameterization via deformations over reference paths. Finally, we draw conclusions in Section 5.

## 2 A space of volumetric objects and an elastic dissimilarity measure

To keep the exposition focused we restrict ourselves to a specific shape model, where shapes are represented by volumetric objects which behave physically like viscous fluids. In fact, the scope of the variational discrete geodesic calculus extends beyond this concrete shape model. We refer to Section 5 for remarks on the application to more general shape spaces.

### 2.1 The space of viscous-fluid objects

Let us introduce the space of shapes as the set of all objects which are closed subsets of $\mathbb{R}^d$ and homeomorphic to a given regular reference object, i.e. images of the reference object under an orientation preserving homeomorphism. Furthermore, objects which coincide up to a rigid body motion are identified with each other. A smooth path in this shape space is associated with a smooth family of deformations. To measure the distance between two objects, a Riemannian metric is defined on variations of objects which reflects the internal fluid friction — called dissipation — that occurs during the shape variation.
The local temporal rate of dissipation in a fluid depends on the symmetric part $\epsilon[v]$ of the gradient of the fluid velocity (the antisymmetric remainder reflects infinitesimal rotations), and for an isotropic Newtonian fluid we obtain the local rate of dissipation

$$\mathrm{diss}(\nabla v)=\lambda\,(\mathrm{tr}\,\epsilon[v])^2+2\mu\,\mathrm{tr}\big(\epsilon[v]^2\big), \qquad (1)$$

where $\lambda$ and $\mu$ are material-specific parameters. Given a family of deformations $\phi(t)$ of the reference object, the change of shape along the path can be described by the (Lagrangian) temporal variation $\dot\phi(t)$ or the associated (Eulerian) velocity field $v(t)=\dot\phi(t)\circ\phi^{-1}(t)$. Hence, the tangent space to the shape space at a shape $\mathcal{O}$ can be identified with the space of initial velocities of deformation paths starting at $\mathcal{O}$. Here we identify those velocities which lead to the same effective shape variation, i.e. those with the same normal component $v\cdot n$ on $\partial\mathcal{O}$, where $n$ is the outer normal on $\partial\mathcal{O}$. Now, integrating the local rate of dissipation for velocity fields on $\mathcal{O}$, we define the Riemannian metric as the symmetric quadratic form

$$G_{\mathcal{O}}(v,v)=\min_{\{\tilde v\,|\,\tilde v\cdot n=v\cdot n\ \text{on}\ \partial\mathcal{O}\}}\int_{\mathcal{O}}\mathrm{diss}(\nabla\tilde v(x))\,dx. \qquad (2)$$

For the shape variation along a path $(\mathcal{O}(t))_{t\in[0,1]}$ described by the Eulerian motion field $v$, path length and energy are defined as

$$L[(\mathcal{O}(t))_{t\in[0,1]}]=\int_0^1\sqrt{G_{\mathcal{O}(t)}(v(t),v(t))}\,dt, \qquad (3)$$

$$E[(\mathcal{O}(t))_{t\in[0,1]}]=\int_0^1 G_{\mathcal{O}(t)}(v(t),v(t))\,dt. \qquad (4)$$

Paths which (locally) minimize the energy, or equivalently the length, are called geodesics (cf. Figure 2). A geodesic thus mimics the energetically optimal way to continuously deform a fluid volume.

### 2.2 Approximating the distance

The evaluation of the geodesic distance based on a direct space and time discretization of (2) and (3) turns out to be computationally very demanding (cf. for instance the approaches in [3, 7]). Hence, we use here an efficient and robust time discrete approximation based on an energy functional which locally behaves like the squared Riemannian distance (i.e.
the squared length of a connecting geodesic): Given two shapes $\mathcal{O}$ and $\tilde{\mathcal{O}}$, we consider an approximation

$$\mathrm{dist}^2(\mathcal{O},\tilde{\mathcal{O}})\approx W_{\mathcal{O}}[\psi], \qquad (5)$$

where $W_{\mathcal{O}}[\psi]$ is the stored deformation energy of a deformation $\psi$, and $\psi$ is the minimizer of this energy over all such deformations with $\psi(\mathcal{O})=\tilde{\mathcal{O}}$. Here, $W$ is a so-called hyperelastic energy density. In correspondence to our assumption that objects are identical if they coincide up to a rigid body motion, we require $W$ to be rigid body motion invariant. Furthermore, we assume the objects to have no preferred material directions, so that $W$ is in addition isotropic (cf. [5]). In the undeformed configuration, energy and stresses (the first derivatives of the energy) are supposed to vanish, so that we require $W(\mathbb{1})=0$ and $DW(\mathbb{1})=0$ (where $D$ denotes the derivative with respect to the matrix argument). Furthermore, we need $W(A)\to\infty$ as $\det A\to 0$ to prohibit material self-penetration, which is linked to the preservation of topology. The approximation property (5) relies on a consistent choice of $W$ for the given metric, which can be expressed by the relation

$$\frac12\frac{d^2}{dt^2}W_{\mathcal{O}}[\psi(t)]\Big|_{t=0}=\frac12\int_{\mathcal{O}}D^2W(\mathbb{1})(\nabla v,\nabla v)\,dx=\int_{\mathcal{O}}\mathrm{diss}(\nabla v)\,dx \qquad (6)$$

along any object path with initial object $\mathcal{O}$ and velocity field $v$. Using the notion of the Hessian of a function on a manifold as the endomorphism representing its second variation in the metric, we can rephrase this approximation condition more geometrically as $\frac12\mathrm{Hess}_{\mathcal{M}}W_{\mathcal{O}}[\mathrm{id}]=\mathrm{id}$, with the usual identification of objects and deformations. For the deformation energy density $W$, this condition implies that its Hessian has to satisfy $\frac12 D^2W(\mathbb{1})(B,B)=\mathrm{diss}(B)$ for all matrices $B$. A suitable example is

$$W(A)=\frac{\mu}{2}\,\mathrm{tr}(A^TA)+\frac{\lambda}{4}\det A^2-\Big(\mu+\frac{\lambda}{2}\Big)\log\det A-\frac{d\mu}{2}-\frac{\lambda}{4}.$$

Assume that the energy density satisfies the above-mentioned properties. We observe that the metric is the first non-vanishing term in the Taylor expansion of the squared length of a curve, i.e.

$$\big(L[(\mathcal{O}(t))_{t\in[0,T]}]\big)^2=T^2\,G_{\mathcal{O}(0)}(v,v)+O(T^3)$$

with $v$ being the initial tangent vector along a smooth path.
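As a quick numerical sanity check of these required properties, the sketch below evaluates the energy density $W$ above for 2×2 matrices (with illustrative parameters μ = λ = 1, not values taken from the paper) and verifies that the energy and the stresses vanish at the identity and that the energy blows up as the determinant degenerates:

```python
import math

MU, LAM, D = 1.0, 1.0, 2  # illustrative material parameters, d = 2

def W(A):
    """Hyperelastic energy density from the text, A = (a, b, c, d) row-major 2x2."""
    a, b, c, d = A
    tr_AtA = a*a + b*b + c*c + d*d          # tr(A^T A)
    det = a*d - b*c
    return (MU/2)*tr_AtA + (LAM/4)*det**2 \
           - (MU + LAM/2)*math.log(det) - D*MU/2 - LAM/4

I = (1.0, 0.0, 0.0, 1.0)
print(W(I))  # 0.0: energy vanishes in the undeformed configuration

# stresses DW(1) vanish too: central finite differences at the identity
h = 1e-6
grad = [(W(tuple(I[j] + h*(j == k) for j in range(4)))
         - W(tuple(I[j] - h*(j == k) for j in range(4)))) / (2*h)
        for k in range(4)]
print(max(abs(g) for g in grad))  # ~0

# energy grows without bound as det A -> 0 (self-penetration penalty)
print(W((0.01, 0.0, 0.0, 1.0)) > W(I))  # True
```

The log-det term is what drives the energy to infinity as the deformation gradient becomes singular, matching the topology-preservation requirement stated above.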
Thus, since the Hessian of the energy and the metric are related by (6), the second order Taylor expansions of the squared distance and of the minimal deformation energy coincide, and indeed

$$\mathrm{dist}^2(\mathcal{O},\tilde{\mathcal{O}})=\min_{\{\psi\,|\,\psi(\mathcal{O})=\tilde{\mathcal{O}}\}}W_{\mathcal{O}}[\psi]+O\big(\mathrm{dist}^3(\mathcal{O},\tilde{\mathcal{O}})\big). \qquad (7)$$

Here, different from [36], we take into account neither mismatch penalties nor perimeter regularizing functionals for the objects.

### 2.3 Discrete length and discrete energy

Now we are in a position to discretize length and energy of paths in shape space. To this end, we first sample the path at times $t_k=k\tau$ for $k=0,\dots,K$ (with time step $\tau=1/K$), denote $\mathcal{O}_k=\mathcal{O}(t_k)$, and obtain the estimates

$$L[(\mathcal{O}(t))_{t\in[0,1]}]\ \ge\ \sum_{k=1}^K\mathrm{dist}(\mathcal{O}_{k-1},\mathcal{O}_k),\qquad E[(\mathcal{O}(t))_{t\in[0,1]}]\ \ge\ \frac1\tau\sum_{k=1}^K\mathrm{dist}^2(\mathcal{O}_{k-1},\mathcal{O}_k)$$

for the length and the energy, where equality holds for geodesic paths. Indeed, the first estimate is straightforward, and an application of the Cauchy–Schwarz inequality leads to

$$\sum_{k=1}^K\mathrm{dist}^2(\mathcal{O}_{k-1},\mathcal{O}_k)\ \le\ \sum_{k=1}^K\Big(\int_{(k-1)\tau}^{k\tau}\sqrt{G_{\mathcal{O}(t)}(v(t),v(t))}\,dt\Big)^2\ \le\ \sum_{k=1}^K\tau\int_{(k-1)\tau}^{k\tau}G_{\mathcal{O}(t)}(v(t),v(t))\,dt=\tau\,E[(\mathcal{O}(t))_{t\in[0,1]}],$$

which implies the second estimate. Together with (7), this motivates the following definition of a discrete path length and a discrete path energy for a discrete path $(\mathcal{O}_0,\dots,\mathcal{O}_K)$ in shape space:

$$L[(\mathcal{O}_0,\dots,\mathcal{O}_K)]=\sum_{k=1}^K\sqrt{W_{\mathcal{O}_{k-1}}[\psi_k]}, \qquad (8)$$

$$E[(\mathcal{O}_0,\dots,\mathcal{O}_K)]=\frac1\tau\sum_{k=1}^K W_{\mathcal{O}_{k-1}}[\psi_k], \qquad (9)$$

where $\psi_k$ denotes the energetically optimal matching deformation with $\psi_k(\mathcal{O}_{k-1})=\mathcal{O}_k$ (cf. also [36]). In fact, (8) and (9) can for general smooth paths even be proven to be first order consistent with the continuous length (3) and energy (4) as $\tau\to0$. For illustration, if the shape space is a two-dimensional manifold embedded in Euclidean space, we can interpret the terms $W_{\mathcal{O}_{k-1}}[\psi_k]$ as the stored elastic energies in springs which connect a sequence of points on the manifold through the ambient space. Then the discrete path energy is the total stored elastic energy in this chain of springs. A discrete geodesic (of order $K$) is now defined as a minimizer of the discrete path energy (9) for fixed end points $\mathcal{O}_0$ and $\mathcal{O}_K$. The discrete geodesic is thus an energetically optimal sequence of deformations from $\mathcal{O}_0$ into $\mathcal{O}_K$.
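The spring-chain picture above can be made concrete in a minimal flat-space analogue (an illustration, not the paper's shape space): "shapes" are points in the plane, the deformation energy is the squared chord length, and minimizing the discrete path energy with fixed endpoints yields the uniformly spaced straight line, i.e. the discrete geodesic:

```python
# Flat-space toy model: the discrete path energy is
#   E = (1/tau) * sum_k |x_k - x_{k-1}|^2,
# whose minimizer for fixed endpoints is the uniformly spaced straight line.
K = 4
x0, xK = (0.0, 0.0), (4.0, 2.0)
# start from a deliberately crooked initial path
path = [x0, (3.0, -1.0), (1.0, 3.0), (0.5, 0.5), xK]

for _ in range(2000):  # relaxation: each interior point moves to its neighbours' mean
    path = [path[0]] + [tuple((path[k-1][i] + path[k+1][i]) / 2 for i in range(2))
                        for k in range(1, K)] + [path[K]]

# interior points converge to linear interpolation between the endpoints
for k, p in enumerate(path):
    print(k, [round(c, 6) for c in p])
```

The averaging update is just gradient relaxation of the quadratic energy; its fixed point satisfies the stationarity condition x_k = (x_{k-1} + x_{k+1})/2, the discrete analogue of vanishing geodesic curvature.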
In the minimization algorithm to be discussed in Section 4.1 we do not explicitly minimize over the object contours as in [36] but instead over reference deformations defined on fixed reference objects. Figure 3 shows a discrete geodesic in the context of multicomponent objects, which is visually identical to that obtained by the more complex approach in [36]. Here, deformations are considered which map every component of a shape onto the corresponding component of the next shape in the discrete path, as the obvious generalization of discrete geodesics between single component shapes.

While in the continuous case geodesic curves equally minimize length (3) and energy (4), minimizers of the discrete path length (8) are in general not related to discrete geodesics (and thus also not to continuous geodesics as $K\to\infty$). Indeed, consider a two-dimensional manifold embedded in Euclidean space, paired with a deformation energy depending on the displacement vector in the ambient space connecting two points on the manifold. Now take into account a continuous geodesic and a discrete path on the manifold whose end points are close to each other in the embedding space but far apart on the surface. Figure 4 depicts such a configuration with a discrete path which almost minimizes the discrete path length. A minimizer of the discrete path length will always jump through the protrusion and never approximate the continuous geodesic, whereas minimizers of the discrete path energy converge to continuous geodesics as $K\to\infty$ and thus rule out such a short cut through the ambient space.

## 3 Time discrete geodesic calculus

With the notion of discrete geodesics at hand, we will now derive a full-fledged discrete geodesic calculus based on a time discrete geometric logarithm and a time discrete exponential map, which then also give rise to a discrete parallel transport and a discrete Levi-Civita connection on shape space.
### 3.1 Discrete logarithm and shape variations

If $(\mathcal{O}(t))_{t\in[0,1]}$ is the unique geodesic connecting $\mathcal{O}$ and $\tilde{\mathcal{O}}$, the logarithm of $\tilde{\mathcal{O}}$ with respect to $\mathcal{O}$ is defined as the initial velocity of the geodesic path. In terms of Section 2.1 we have $\log_{\mathcal{O}}(\tilde{\mathcal{O}})=v(0)$, where $v$ is the Eulerian velocity of the associated family of deformations. On a geodesically complete Riemannian manifold the logarithm exists as long as the distance between the two shapes is sufficiently small. The associated logarithmic map represents (nonlinear) variations on the manifold as (linear) tangent vectors. The initial velocity can be approximated by a difference quotient in time,

$$v(0,x)=\frac1\tau\zeta(x)+O(\tau),$$

where $\zeta$ denotes a displacement on the initial object $\mathcal{O}$. Thus, we obtain

$$\tau\log_{\mathcal{O}}(\tilde{\mathcal{O}})=\zeta(x)+O(\tau^2).$$

This gives rise to a consistent definition of a time discrete logarithm. Let $(\mathcal{O}_0,\dots,\mathcal{O}_K)$ be a discrete geodesic between $\mathcal{O}$ and $\tilde{\mathcal{O}}$ with an associated sequence of matching deformations $(\psi_k)_k$; then we consider the first displacement $\zeta_1=\psi_1-\mathrm{id}$ as an approximation of $\tau\log_{\mathcal{O}}(\tilde{\mathcal{O}})$. Taking into account that $\tau=1/K$, we thus define the discrete $\frac1K$-logarithm

$$\Big(\tfrac1K\mathrm{LOG}\Big)_{\mathcal{O}}(\tilde{\mathcal{O}}):=\zeta_1. \qquad (10)$$

In the special case $K=1$, the discrete logarithm is simply the displacement of the single matching deformation. As in the continuous case, the discrete logarithm can be considered as a representation of the nonlinear variation of $\tilde{\mathcal{O}}$ in the (linear) tangent space of displacements on $\mathcal{O}$. On a sequence of successively refined discrete geodesics we expect the convergence

$$K\Big(\tfrac1K\mathrm{LOG}\Big)_{\mathcal{O}}(\tilde{\mathcal{O}})\ \to\ \log_{\mathcal{O}}(\tilde{\mathcal{O}})\quad\text{for }K\to\infty \qquad (11)$$

(cf. Figure 5 for an experimental validation of this convergence behaviour).

### 3.2 Discrete exponential and shape extrapolation

In the continuous setting, the exponential map maps a tangent vector onto the end point of a geodesic starting with the given tangent vector as initial velocity. Using the notation from the previous section, a simple scaling argument shows that the whole geodesic is traced out by exponentiating scaled versions of the initial velocity. We now aim at defining a discrete power exponential map $\mathrm{EXP}^k$ such that $\mathrm{EXP}^k_{\mathcal{O}_0}(\zeta_1)=\mathcal{O}_k$ on a discrete geodesic of order $K$ with first displacement $\zeta_1$ (the notation is motivated by the behaviour of the exponential on matrix groups).
Our definition will reflect the following recursive properties of the continuous exponential map:

$$\exp_{\mathcal{O}}(v)=\big(\log_{\mathcal{O}}\big)^{-1}(v),\qquad \exp_{\mathcal{O}}(2v)=\big(\tfrac12\log_{\mathcal{O}}\big)^{-1}(v),$$

$$\exp_{\mathcal{O}}(kv)=\exp_{\exp_{\mathcal{O}}((k-2)v)}(2v_{k-1})\quad\text{for}\quad v_{k-1}:=\log_{\exp_{\mathcal{O}}((k-2)v)}\exp_{\mathcal{O}}((k-1)v).$$

Replacing $\exp$ by $\mathrm{EXP}$, $\log$ by $\mathrm{LOG}$, and $v$ by $\zeta$, we obtain the recursive definition

$$\mathrm{EXP}^1_{\mathcal{O}}(\zeta):=\big(\mathrm{LOG}\big)^{-1}_{\mathcal{O}}(\zeta), \qquad (12)$$

$$\mathrm{EXP}^2_{\mathcal{O}}(\zeta):=\big(\tfrac12\mathrm{LOG}\big)^{-1}_{\mathcal{O}}(\zeta), \qquad (13)$$

$$\mathrm{EXP}^k_{\mathcal{O}}(\zeta):=\mathrm{EXP}^2_{\mathrm{EXP}^{k-2}_{\mathcal{O}}(\zeta)}(\zeta_{k-1}). \qquad (14)$$

It is straightforward to verify that these maps are well-defined as long as the discrete logarithm on the right is invertible. Equation (12) implies $\mathrm{EXP}^1_{\mathcal{O}}(\zeta)=(\mathrm{id}+\zeta)(\mathcal{O})$, and (13) in fact represents a variational constraint for a discrete geodesic flow of order 2: Given the object $\mathcal{O}$, we consider discrete geodesic paths $(\mathcal{O},\tilde{\mathcal{O}}_1,\tilde{\mathcal{O}}_2)$ of order 2, where for any chosen $\tilde{\mathcal{O}}_2$ the middle object is defined via minimization of (9), so that we may write $\tilde{\mathcal{O}}_1[\tilde{\mathcal{O}}_2]$. We now identify $\mathrm{EXP}^2_{\mathcal{O}}(\zeta)$ as the object $\tilde{\mathcal{O}}_2$ for which $\zeta$ is the energetically optimal displacement from $\mathcal{O}$ to $\tilde{\mathcal{O}}_1[\tilde{\mathcal{O}}_2]$, i.e.

$$\mathrm{id}+\zeta=\mathop{\mathrm{argmin}}_{\{\psi_1\,|\,\psi_1(\mathcal{O})=\tilde{\mathcal{O}}_1[\tilde{\mathcal{O}}_2]\}}W_{\mathcal{O}}[\psi_1] \qquad (15)$$

up to a rigid body motion. Alternatively, condition (15) can be phrased as

$$\mathrm{id}+\zeta=\mathop{\mathrm{argmin}}_{\{\psi_1\}}\ \min_{\{\psi_2\,|\,(\psi_2\circ\psi_1)(\mathcal{O})=\tilde{\mathcal{O}}_2\}}\big(W_{\mathcal{O}}[\psi_1]+W_{\psi_1(\mathcal{O})}[\psi_2]\big). \qquad (16)$$

Figure 6 conceptually sketches the procedure to compute $\mathrm{EXP}^2_{\mathcal{O}}(\zeta)$. For a given initial object $\mathcal{O}$ and initial displacement $\zeta$, the discrete exponential is selected from a fan of discrete geodesics with varying end point as the terminal point of a discrete geodesic of order 2 in such a way that (16) holds. To compute $\mathrm{EXP}^2_{\mathcal{O}}(\zeta)$ in the geodesic flow algorithm (13) and (14), we have to find the root of

$$F_{\mathcal{O},\zeta}(\tilde{\mathcal{O}}_2)=\Big(\tfrac12\mathrm{LOG}\Big)_{\mathcal{O}}(\tilde{\mathcal{O}}_2)-\zeta, \qquad (17)$$

implicitly assuming that $\zeta$ is small enough so that discrete geodesics are unique (cf. Section 4.2 for the algorithmic realization based on a representation of the unknown domain via a deformation). Equation (14) describes the recursion to compute $\mathrm{EXP}^k_{\mathcal{O}}(\zeta)$ based on the above single step scheme: for given $\mathcal{O}_{k-2}$ and $\mathcal{O}_{k-1}$, one first retrieves $\zeta_{k-1}=\psi_{k-1}-\mathrm{id}$ from the previous step, where

$$\psi_{k-1}=\mathop{\mathrm{argmin}}_{\{\psi\,|\,\psi(\mathcal{O}_{k-2})=\mathcal{O}_{k-1}\}}W_{\mathcal{O}_{k-2}}[\psi].$$

Then (13) is applied to compute the next object from $\mathcal{O}_{k-2}$ and $\zeta_{k-1}$ as the root of $F_{\mathcal{O}_{k-2},\zeta_{k-1}}$. For sufficiently small $\zeta$ we expect this recursion to be well-defined.
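The recursion (12)-(14) can be sanity-checked in a flat toy setting (an illustrative stand-in, not the variational shape-space version), where objects are real numbers, the discrete logarithm of order K is simply the averaged difference, and the discrete exponential must reduce to stepping k times along the initial displacement:

```python
# Flat analogue: discrete geodesics are uniformly spaced straight segments,
# so (1/K LOG)_O(O~) = (O~ - O)/K and EXP^k_O(zeta) must equal O + k*zeta.
def LOG(O, O_tilde, K):          # first displacement of the order-K geodesic
    return (O_tilde - O) / K

def EXP(O, zeta, k):
    if k == 0:
        return O
    if k == 1:                   # inverse of (1/1 LOG), cf. (12)
        return O + zeta
    if k == 2:                   # inverse of (1/2 LOG), cf. (13)
        return O + 2 * zeta
    prev2, prev1 = EXP(O, zeta, k - 2), EXP(O, zeta, k - 1)
    zeta_k1 = LOG(prev2, prev1, 1)   # displacement between the last two steps
    return EXP(prev2, zeta_k1, 2)    # one more order-2 step, cf. (14)

O, zeta = 3.0, 0.25
for k in range(6):
    print(k, EXP(O, zeta, k))    # 3.0, 3.25, 3.5, 3.75, 4.0, 4.25
```

Each order-2 step doubles the last displacement from the second-to-last point, exactly mirroring the parallelogram logic of the continuous scaling identities above.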
Since by definition every triplet of the sequence is a geodesic of order 2 and minimizes the discrete path energy, the resulting family indeed is a discrete geodesic of order $k$. In fact, discrete geodesics that are variationally described as discrete energy minimizing paths between two given objects can be reproduced via the discrete geodesic flow associated with the discrete exponential map (cf. Figure 7). As for the discrete logarithm, we experimentally observe convergence of the discrete exponential map in the sense

$$\mathrm{EXP}^k_{\mathcal{O}}\big(\tfrac1k\zeta\big)\ \to\ \exp_{\mathcal{O}}(\zeta)\quad\text{for }k\to\infty, \qquad (18)$$

as shown in Figure 5. An example of geodesic shape extrapolation for multicomponent objects is depicted in Figure 8.

### 3.3 Discrete parallel transport and detail transfer

Parallel transport makes it possible to translate a vector $\zeta$ (considered as the variation of an object) along a curve in shape space. The transported vector changes as little as possible while keeping the angle between the vector and the path velocity fixed. Using the Levi-Civita connection, this can be phrased as $\nabla_v\zeta=0$ along the curve. There is a well-known first-order approximation of parallel transport called Schild's ladder [8, 15], which is based on the construction of a sequence of geodesic parallelograms, sketched in Figure 9, where the two diagonal geodesics always meet at their midpoints. Given a curve $(\mathcal{O}(t))_t$ and a tangent vector $\zeta_0$, the approximation of the parallel transported vector via a geodesic parallelogram can be expressed as

$$\mathcal{O}^p_{k-1}=\exp_{\mathcal{O}((k-1)\tau)}(\zeta_{k-1}),$$
$$\mathcal{O}^\times_k=\exp_{\mathcal{O}^p_{k-1}}\Big(\tfrac12\log_{\mathcal{O}^p_{k-1}}\mathcal{O}(k\tau)\Big),$$
$$\mathcal{O}^p_k=\exp_{\mathcal{O}((k-1)\tau)}\Big(2\log_{\mathcal{O}((k-1)\tau)}\mathcal{O}^\times_k\Big),$$
$$\zeta_k=\log_{\mathcal{O}(k\tau)}\mathcal{O}^p_k.$$

Here, $\mathcal{O}^\times_k$ is the midpoint of the two diagonals of the geodesic parallelogram with vertices $\mathcal{O}((k-1)\tau)$, $\mathcal{O}^p_{k-1}$, $\mathcal{O}(k\tau)$, and $\mathcal{O}^p_k$. This scheme can easily be transferred to discrete curves in shape space based on the discrete logarithm and the discrete exponential introduced above. In the $k$th step of the discrete transport we start with a displacement $\zeta_{k-1}$ on $\mathcal{O}_{k-1}$ and compute

$$\mathcal{O}^p_{k-1}=\mathrm{EXP}^1_{\mathcal{O}_{k-1}}(\zeta_{k-1}),$$
$$\mathcal{O}^\times_k=\mathrm{EXP}^1_{\mathcal{O}^p_{k-1}}\Big(\big(\tfrac12\mathrm{LOG}\big)_{\mathcal{O}^p_{k-1}}(\mathcal{O}_k)\Big),$$
$$\mathcal{O}^p_k=\mathrm{EXP}^2_{\mathcal{O}_{k-1}}\Big(\big(\mathrm{LOG}\big)_{\mathcal{O}_{k-1}}(\mathcal{O}^\times_k)\Big),$$
$$\zeta_k=\big(\mathrm{LOG}\big)_{\mathcal{O}_k}(\mathcal{O}^p_k),$$

where $\zeta_k$ is the transported displacement on $\mathcal{O}_k$.
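The ladder construction above can be checked in the Euclidean plane, where exp and log are trivial and genuine parallel transport must leave the vector unchanged (an illustrative stand-in, not the shape-space implementation):

```python
# Schild's ladder rung in the flat plane: exp_p(v) = p + v, log_p(q) = q - p.
def exp(p, v):   return (p[0] + v[0], p[1] + v[1])
def log(p, q):   return (q[0] - p[0], q[1] - p[1])
def scale(a, v): return (a * v[0], a * v[1])

def ladder_step(a, b, zeta):
    """Transport the vector zeta from base point a to base point b (one rung)."""
    p   = exp(a, zeta)                    # lift the vector
    mid = exp(p, scale(0.5, log(p, b)))   # midpoint of the diagonal p--b
    p2  = exp(a, scale(2.0, log(a, mid))) # shoot a through the midpoint, twice as far
    return log(b, p2)                     # read off the transported vector

zeta = (0.2, -0.1)
curve = [(0.0, 0.0), (1.0, 0.5), (2.5, 0.5), (3.0, 2.0)]
for a, b in zip(curve, curve[1:]):
    zeta = ladder_step(a, b, zeta)
print(tuple(round(c, 12) for c in zeta))  # (0.2, -0.1): unchanged in flat space
```

In flat space the parallelogram closes exactly, so each rung returns the input vector; on a curved shape space the same four steps pick up the first-order holonomy correction that parallel transport requires.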
Here, $\mathcal{O}^\times_k$ is the midpoint of the two discrete geodesics of order 2 with end points $\mathcal{O}_{k-1}$, $\mathcal{O}^p_k$ and $\mathcal{O}^p_{k-1}$, $\mathcal{O}_k$, respectively. Since the last of the above steps is the inverse of the first step in the subsequent iteration, these two steps need to be performed only in the final iteration. We will denote the resulting transport operator by $P$. Figure 10 shows examples of discrete parallel transport for feature transfer along curves in shape space.

Remark: As in the continuous case, the discrete parallel transport can be used to define a discrete Levi-Civita connection. For a tangent vector $\xi$ at $\mathcal{O}$ and a vector field $\eta$ in the tangent bundle, one computes $\mathcal{O}_\tau=\mathrm{EXP}^1_{\mathcal{O}}(\tau\xi)$ and then defines

$$\nabla^\tau_\xi\eta:=\frac1\tau\Big(P_{\mathcal{O},\mathcal{O}_\tau}\,\eta(\mathcal{O}_\tau)-\eta(\mathcal{O})\Big)$$

as the time discrete connection with time step size $\tau$.

## 4 Numerical discretization

The proposed discrete geodesic calculus requires an effective and efficient spatial discretization

• of volumetric objects in the underlying shape space,
• of nonlinear deformations to encode matching correspondences,
• and of linear displacements as approximate tangent vectors.

We restrict ourselves here to the case of two-dimensional objects. To this end we consider the space of piecewise affine finite element functions on a regular simplicial mesh over a rectangular computational domain. Here $h$ indicates the grid size, which in our applications ranges from a coarse to a fine resolution. Deformations and displacements are then discretized as finite element functions. Objects, the original degrees of freedom in our geometric calculus, are represented via deformations over reference objects. These reference objects are encoded by approximate characteristic functions, and the deformations are considered as injective and discretized as finite element functions as well.

### 4.1 Parameterization of discrete geodesics

To compute a discrete geodesic — different from [36] — we now replace the objects as arguments of the energy (9) by associated deformations over a set of reference domains as described above.
By this technique, instead of deformations and domain descriptions (e.g. via level sets) as in [36], we are able to consider solely parameterizing deformations, which turns out to be a significant computational advantage. Next, we assume that reference matching deformations are given (cf. Figure 11). Now, we express the matching deformations over which we minimize in (9) in terms of the parameterizing deformations and the reference matching deformations and set ψk
https://dsp.stackexchange.com/questions/27878/how-do-delay-lines-create-so-many-effects/29643
# How do delay lines create so many effects?

I was trialing this audio plug-in from SonicCharge (there's a program demo on the site). It claims to be based on a delay line and uses some logic to modify it:

> At its core is a 12-bit digital delay with variable sample rate from $0$ to $352\textrm{ kHz}$. The delay is controlled by a programmable processor that allows you to change and modulate the delay time with various "operators".

I know from previous readings that delay lines can be used to construct a pitch shifter. What are the properties of the delay line that allow for such numerous applications? That is, what is it about a delay line that lets it create such a varied range of effects?

• I know from previous readings that delay lines can be used to construct a pitch shifter. Could you back this up with a link or a citation? – Marcus Müller Aug 20 '16 at 9:11
• @MarcusMüller google.com/search?q=delay+line+pitch+shifter – mavavilj Aug 20 '16 at 10:26
• No, I can search myself ;) I meant that, to clarify your question, you should explain what you're actually confused about, seeing that you can easily find a lot of resources on this. It's your job as asker to clarify how much you've understood, and where one should start to explain! – Marcus Müller Aug 20 '16 at 10:29

A DSP/audio delay line doesn't create any effect in itself; a delay is simply a timed delay of the signal. On its own it makes no difference to the sound: it repeats the sound once and mixes it over the original, or sends it to a different mix line. A feedback delay repeats the sound multiple times at the delay-interval frequency. It sounds like a digital echo, and can become an oscillator above 20 Hz. Almost all FX delays use variable feedback controls to set the number of repetitions of a sound, and the repeats are filtered as they feed back, therefore diminishing.
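The feedback delay described in this answer can be sketched in a few lines; the function name and parameters below are illustrative, not from any particular plugin:

```python
# Minimal feedback delay line: one circular buffer, a feedback gain that
# controls how many audible repeats occur, and a wet mix for the echo level.
# (Real plugins usually add filtering in the feedback path; omitted here.)
def echo(x, delay, feedback=0.5, mix=0.5):
    buf = [0.0] * delay          # circular delay buffer
    out = []
    for n, s in enumerate(x):
        d = buf[n % delay]       # sample written `delay` samples ago
        buf[n % delay] = s + feedback * d   # feed the delayed output back in
        out.append(s + mix * d)  # dry signal plus attenuated echo
    return out

# An impulse produces echoes at multiples of the delay time, each repeat
# `feedback` times quieter than the last:
y = echo([1.0] + [0.0] * 9, delay=3, feedback=0.5, mix=0.5)
print(y)  # [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

With `feedback` at or above 1.0 the repeats no longer decay, which is exactly the runaway-oscillator behaviour the answer mentions.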
Modern delays can work like grain-synth samplers, and often are based on them, with integrated effects like pitch shifting; research grain delay and grain resampling for more about that. Squarepusher is someone who uses a lot of grain delay sometimes... So a delay by itself is barely an effect, but it can become an oscillator, has complex variants, and can feed into other lines that add any kind of additional FX. To get a firmer grasp of that program I found this video more informative: it's a pretty simple and very well balanced grain delay with good controls, some bit-crushing types of FX, and probably some filters too.

As far as I know, delay is like the holy grail of effects processing. First off, delay lines can be used to create filters. If we delay a sound by one sample and play it over itself, we make the simplest LP filter. By extension, EQs are made of delay lines, since these are just a number of filters combined. Slap-back delays are also, naturally, made of several delay lines, with more time in between than in filters, whose delays are too short for our ears to perceive two distinct sounds. Chorus and flangers are also made with delays, but here the delay time is modulated. I hope this gives you some insight into the power of delay!

"Delay effect" refers to more than just a simple delay. Typically there is a feedback parameter that allows running the output back into the input so multiple echoes are created. This feedback path often has a filter that allows shaping the frequency content of the echoes. Fancier delays have multiple "taps" that can be combined or fed back to create specific temporal patterns. When the delay amount varies with time, the frequencies of the input signal will indeed be changed. However, that's not actually a pitch shifter (which changes frequency while maintaining the signal timing); it's more of a modulation effect like a chorus, phaser, or flanger.
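The "simplest LP filter" mentioned above, y[n] = (x[n] + x[n-1]) / 2, can be checked directly: a constant (DC) signal passes through unchanged, while the fastest possible alternation (Nyquist) is cancelled:

```python
# One-sample delay averaged with the input: a first-order low-pass filter.
def one_sample_lp(x):
    prev, out = 0.0, []
    for s in x:
        out.append(0.5 * (s + prev))  # average current sample with delayed copy
        prev = s
    return out

print(one_sample_lp([1.0, 1.0, 1.0, 1.0]))    # DC passes: [0.5, 1.0, 1.0, 1.0]
print(one_sample_lp([1.0, -1.0, 1.0, -1.0]))  # Nyquist cancelled: [0.5, 0.0, 0.0, 0.0]
```

Frequencies in between are attenuated smoothly (the magnitude response is |cos(ω/2)|), which is why chaining many such delay-based filters yields EQs, as the answer notes.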
• I'm specifically wondering why varying the delay time creates those pitch-shifting effects. – mavavilj Dec 23 '15 at 19:08
• It's similar to the Doppler effect when a car drives towards/away from you. Driving towards you is the same as shortening the delay. See for example dsp.stackexchange.com/questions/26481/… – Hilmar Dec 23 '15 at 21:30
• If you stretch a sound it has lower energy, and high energy = high pitch. I have a good grasp of delay effects through filters, into FM, into AM, into filter FM, but I couldn't identify that sound effect. It was mostly just a single, very refined frequency balance, similar to metal bowls in a washbasin; the turning of the controller was a simple grain sampler. – DeltaEnfieldWaid Mar 22 '16 at 23:28
https://www.physicsforums.com/threads/renormalization-and-divergences.125684/
Renormalization and divergences

1. Jul 10, 2006 eljose

Check the webpage http://arxiv.org/ftp/math/papers/0402/0402259.pdf [Broken], especially the part on the Abel-Plana formula as a renormalization tool:

$$\zeta(-m,\beta)-\beta ^{m}/2- i\int_{0}^{\infty}dt\,[ (it+\beta )^{m}-(-it+\beta )^{m}](e^{2 \pi t}-1)^{-1}=\int_{0}^{\infty}dp\,p^{m}$$

valid for every m>0, so renormalization for gravity may be possible. :uhh:

Last edited by a moderator: May 2, 2017

2. Jul 10, 2006 CarlB

After reading Ramanujan's letter to Hardy, I picked up a copy of Hardy's "Divergent Series" that you reference in this paper. I've always found it an interesting book. But these methods of obtaining finite values for horribly divergent series have always struck me as rather arbitrary and unphysical. In fact, Hardy's book gives examples of sums that have more than one possible finite value, depending on how you group the terms and the like. This raises two questions. First, do you have any physical explanation for why these forms should be used? Second, do the sums you obtain this way match the usual methods of QFT? And how do they extend those methods?

Carl

3. Jul 11, 2006 eljose

Zeta regularization has been used before in calculations of the Casimir effect ($$\zeta(-3,0)$$) and in string theory, to give a finite meaning to the series

$$1+2+3+4+5+6+7+8+9+\dots \;\rightarrow\; \zeta(-1,0)$$

An explanation of why it should work is included in Hardy's book; the Abel-Plana formula is an exact result of complex analysis. Note that the "zeta" function used here is the Hurwitz zeta.

4. Jul 12, 2010 shiekh

Operator regularization is a generalization of the zeta-function method that works to all loop orders.
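For reference, the finite value that zeta regularization assigns to the divergent sum in the thread comes from analytic continuation of the zeta function (as the Hurwitz parameter goes to zero, the Hurwitz zeta reduces to the Riemann zeta up to the leading term), giving the value familiar from string theory:

$$1+2+3+4+\dots \;\longrightarrow\; \zeta(-1) = -\frac{1}{12}$$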
https://aran.library.nuigalway.ie/xmlui/handle/10379/715/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=58&type=title
Now showing items 59-78 of 112

• #### Observation of Extended VHE Emission from the Supernova Remnant IC 443 with VERITAS  (2009) We present evidence that the very-high-energy (VHE, E > 100 GeV) gamma-ray emission coincident with the supernova remnant IC 443 is extended. IC 443 contains one of the best-studied sites of supernova remnant/molecular ...

• #### Observation of gamma-ray emission from the galaxy M87 above 250 GeV with VERITAS  (2008) The multiwavelength observation of the nearby radio galaxy M87 provides a unique opportunity to study in detail processes occurring in Active Galactic Nuclei from radio waves to TeV gamma-rays. Here we report the detection ...

• #### Observations of the shell-type SNR Cassiopeia A at TeV energies with VERITAS  (2010) We report on observations of very high-energy gamma rays from the shell-type supernova remnant Cassiopeia A with the VERITAS stereoscopic array of four imaging atmospheric Cherenkov telescopes in Arizona. The total exposure ...

• #### Observations of the Unidentified TeV Gamma-Ray Source TeV J2032+4130 with the Whipple Observatory 10 m Telescope  (2006) We report on observations of the sky region around the unidentified TeV gamma-ray source TeV J2032+4130 carried out with the Whipple Observatory 10 m atmospheric Cherenkov telescope for a total of 65.5 hrs between 2003 and ...

• #### Optical Observations of PSR J0205+6449  (2008) PSR J0205+6449 is an X-ray and radio pulsar in supernova remnant 3C 58. We report on observations of the central region of 3C 58 using the 4.2-m William Herschel Telescope with the intention of identifying the optical ...

• #### Optical pulsations from the anomalous X-ray pulsar 1E 1048.1-5937  (2009) We present high-speed optical photometry of the anomalous X-ray pulsar 1E 1048.1-5937 obtained with ULTRACAM on the 8.2-m Very Large Telescope in June 2007. We detect 1E 1048.1-5937 at a magnitude of i'=25.3+/-0.2, consistent ...
• #### Pulse variation of the optical emission of Crab pulsar  (2007) The stability of the optical pulse of the Crab pulsar is analyzed based on the 1 $\mu$s resolution observations with the Russian 6-meter and William Herschel telescopes equipped with different photon-counting detectors. The ...

• #### Radio Imaging of the Very-High-Energy Gamma-Ray Emission Region in the Central Engine of a Radio Galaxy  (2009) The accretion of matter onto a massive black hole is believed to feed the relativistic plasma jets found in many active galactic nuclei (AGN). Although some AGN accelerate particles to energies exceeding 10^12 electron ...

• #### A recent change in the optical and gamma ray polarization of the Crab nebula and pulsar  (Oxford journals, 2015-12-31) We report on observations of the polarization of optical and γ-ray photons from the Crab nebula and pulsar system using the Galway Astronomical Stokes Polarimeter (GASP), the Hubble Space Telescope, Advanced Camera for ...

• #### Recent Results from the VERITAS Collaboration  (2002) A decade after the discovery of TeV gamma-rays from the blazar Mrk 421 (Punch et al. 1992), the list of TeV blazars has increased to five BL Lac objects: Mrk 421 (Punch et al. 1992; Petry et al. 1996; Piron et al. 2001), ...

• #### Refinement and validation of an exposure model for the pharmaceutical industry  (2011) Objectives: Assessment of worker's exposure is becoming increasingly critical in the pharmaceutical industry as drugs of higher potency are being manufactured. The batch nature of operations often makes it difficult to ...

• #### Rotational Modulation of M/L Dwarfs due to Magnetic Spots  (2007) We find periodic I-band variability in two ultracool dwarfs, TVLM 513-46546 and 2MASS J00361617+1821104, on either side of the M/L dwarf boundary. Both of these targets are short-period radio transients, with the detected ...
• #### A search for brief optical flashes associated with the SETI target KIC 8462852  (IOP Publishing, 2016-02-18) The F-type star KIC 8462852 has recently been identified as an exceptional target for search for extraterrestrial intelligence (SETI) observations. We describe an analysis methodology for optical SETI, which we have used ...

• #### A search for CO+ in planetary nebulae  (2007) We have carried out a systematic search for the molecular ion CO+ in a sample of 8 protoplanetary and planetary nebulae in order to determine the origin of the unexpectedly strong HCO+ emission previously detected in these ...

• #### A Search for Dark Matter Annihilation with the Whipple 10m Telescope  (2008) We present observations of the dwarf galaxies Draco and Ursa Minor, the local group galaxies M32 and M33, and the globular cluster M15 conducted with the Whipple 10m gamma-ray telescope to search for the gamma-ray signature ...

• #### A search for optical bursts from RRAT J1819-1458: II. Simultaneous ULTRACAM-Lovell Telescope observations  (2011) The Rotating RAdio Transient (RRAT) J1819-1458 exhibits ~3 ms bursts in the radio every ~3 min, implying that it is visible for only ~1 per day. Assuming that the optical light behaves in a similar manner, long exposures ...

• #### A search for pulsations from Geminga above 100 GeV with VERITAS  (American Astronomical Society, 2015-02-10) We present the results of 71.6 hr of observations of the Geminga pulsar (PSR J0633+1746) with the VERITAS very-high-energy gamma-ray telescope array. Data taken with VERITAS between 2007 November and 2013 February were ...

• #### A Search for Pulsed TeV Gamma Ray Emission from the Crab Pulsar  (1999) We present the results of a search for pulsed TeV emission from the Crab pulsar using the Whipple Observatory's 10m gamma-ray telescope. The direction of the Crab pulsar was observed for a total of 73.4 hours between 1994 ...
• #### Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar  (1999) We present the results of a search for pulsed TeV emission from the Crab pulsar using the Whipple Observatory's 10 m gamma-ray telescope. The direction of the Crab pulsar was observed for a total of 73.4 hours between 1994 ...
https://ireggae.com/seagram-s-ikrrra/how-to-calculate-real-gdp-per-capita-92af94
# How to calculate real GDP per capita

Nominal GDP includes inflation, so comparing nominal GDP across different time periods also picks up growth that is due only to inflation; this inflates the growth rate and hides the real picture. Using real GDP removes the effect of inflation and makes the comparison meaningful. Real GDP per capita gives a rough indication of average living standards.

Let's start with the simplest concepts. The first is "gross domestic product," which measures everything produced within a country's borders in a given period; it is usually reported for a quarter or a year. The second is "real GDP," which is GDP without the effect of price changes: we take the quantities of goods and services produced in each year (for example, 1960 or 1973) and multiply them by their prices in a base year (in this case, 2005), so we get a measure of GDP that uses prices that do not change from year to year (see the Bureau of Economic Analysis, "Concepts and Methods of the U.S. National Income and Product Accounts," pages 4-25 to 4-26). Nominal GDP, by contrast, refers to the value of goods and services measured at current market prices, i.e., the actual prices paid at any point in time. The third is "per capita," which means "per person": real GDP divided by the country's population. Per capita figures are the best way to compare economic indicators like GDP between countries with very different population sizes. The gross national income (GNI) per capita uses the same information as GDP per capita, plus any income residents have brought in from foreign investments, such as interest and dividends earned overseas.

As an example of calculating nominal GDP from a basket of products priced over three periods:

1. Vegetables = ($10 * 200) + ($11 * 220) + ($13 * 230) = $7,410
2. Juice = ($8 * 130) + ($10 * 110) + ($11 * 90) = $3,130
3. Cheese = ($5 * 50) + ($6 * 40) + ($7 * 50) = $840
4. Milk = ($12 * 20) + ($13 * 22) + ($15 * 26) = $916
5. Fruits = ($15 * 25) + ($16 * 30) + ($19 * 35) = $1,520

summing to a nominal GDP of $13,816 for the basket. Nominal GDP can also be built from expenditure components: Nominal GDP = Private Consumption + Govt Expenditure + Exports - Imports = 15,00,000k + 22,50,000k + 7,50,000k - 10,50,000k = 34,50,000k.

GDP per capita is simply total GDP divided by the population. For example, the US produced around $20 trillion of GDP in 2018 for a population of more than 300 million people; in 2014, US GDP was $16.768 trillion and the Census Bureau estimated the population at 319 million, giving a per capita GDP of about $52,564. Likewise, a country with $10 trillion of GDP and 250 million people has a GDP per capita of $40,000, and a country reporting GDP of $400 million with a census population of 200,000 has a GDP per capita of $400,000,000 / 200,000 = $2,000.

Note that per capita growth can differ from GDP growth when the population changes. If at t-1 a population of 100 produces a GDP of 100, and at t a population of 102.5 produces a GDP of 101.5, per capita income is $101.5 / 102.5 \simeq 0.99024$, a rate of change of $\frac{0.99024 - 1}{1} \simeq -0.98\%$. As an exercise: Country X is a small growing economy with a population of 956,899 as per the last census report; you are required to calculate its real GDP per capita assuming a deflator of 18.50%.
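The simple division just described can be checked in a couple of lines (the numbers are the article's own example; the helper name is illustrative):

```python
def gdp_per_capita(gdp, population):
    """GDP per capita is simply total GDP divided by population."""
    return gdp / population

# The article's example: a country reporting $400 million of GDP
# with a census population of 200,000.
small_country = gdp_per_capita(400_000_000, 200_000)      # 2000.0
large_country = gdp_per_capita(10_000_000_000_000, 250_000_000)  # 40000.0
```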
Why is real GDP used to calculate growth? Consider an economy whose nominal GDP rises from $50 to $87 even though it produced only one additional block of cheese and one fewer bottle of wine: most of the increase in GDP was due to prices rising, not because more output was produced. Real gross domestic product (GDP) is an inflation-adjusted measure that reflects the value of all goods and services produced by an economy in a given year, and the real GDP growth rate is measured as the percentage rate of increase in real GDP. It is the broadest indicator of economic activity and the most closely watched. A high GDP indicates a healthy economy, which typically leads to high wages and low unemployment. The components of GDP are personal consumption, business investment, government spending, and exports minus imports; the Bureau of Economic Analysis reports it quarterly, updating its estimate each month.

Here's the formula to calculate real GDP per capita (R) if you only know nominal GDP (N) and the deflator (D), where C is the population:

(N / D) / C = real GDP per capita

The deflator is the ratio of what goods and services would cost today if there had been no inflation since the base year. In the United States, the BEA calculates real GDP using 2012 as the base year and provides the deflator in Table 1.1.9. If you already know real GDP (R), you simply divide it by the population (C). Regardless of which formula you use, the best way to calculate a country's real GDP per capita is to use the official real GDP estimates published by its government agencies, such as the Bureau of Economic Analysis for the United States, and divide them by the population.

Worked example: Country MNS has a nominal GDP of $450 billion, a deflator rate of 25%, and a population of 100 million. Its real GDP per capita is ($450,000,000,000 / (1 + 25%)) / 100,000,000 = $3,600.

Another example: in 2017, a country's GDP was \$100 and its population was 100, so its GDP per capita was \$1. In 2018, its GDP was \$110, its population was 105, and the price level rose by 3%; nominal GDP per capita was $\$110 \div 105 \approx \$1.048$, and deflating by the 3% price rise gives a smaller real figure.

Investment scenario: an analyst looking for the next developing country in which to invest clients' funds of approximately $140 million has shortlisted three developing countries and wants to select the one with the highest real GDP per capita; if the differences in real GDP per capita are less than 10k, she will split the funds between the countries in the ratio of their real GDP per capita.
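The (N / D) / C formula can be checked with the Country MNS numbers from the article (a sketch; the function name is illustrative):

```python
def real_gdp_per_capita(nominal_gdp, deflator_pct, population):
    """(N / D) / C: deflate nominal GDP to real GDP, then divide by population."""
    real_gdp = nominal_gdp / (1 + deflator_pct / 100)
    return real_gdp / population

# Country MNS: $450 billion nominal GDP, 25% deflator, 100 million people.
# Real GDP = 450e9 / 1.25 = $360 billion, so per capita = $3,600.
mns = real_gdp_per_capita(450e9, 25, 100e6)
```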
Real GDP per capita is used to compare the standard of living between countries and over time. GDP is the gross domestic product of a country: the amount of goods and services produced inside its borders. The inflation-adjusted number, real GDP, is then divided by the country's population to give real GDP per capita.
Rate for any other period July 22, 2020 similar to another measure of inflation, the real,... Is widely used in the market value of the U.S. economy for the next country. Per capita of 0,99024 which is calculated by the number of people in the United in... Ratio of what goods and services produced by an economy over time formula calculate. X is a growing small economy everything that a country 's economic output per person with... Capita of the goods and services produced inside a country 's population, which typically leads to high and! One year or one quarter data you have available are given all desired. Of the country X is a country 's economic output per person for that economy measures everything produced within country! Or Quality of WallStreetMojo a ) calculate the GDP number by the number of in! Warrant the Accuracy or Quality of WallStreetMojo concept is “gross domestic product.” Accessed July 22 2020... Rate per person for that economy gross domestic Product implicit Price Deflator.” July. Given all the desired inputs to calculate real GDP so you can be you... Is 18.50 % level rose by 3 % for remaining countries about the Census! Of $450 billion and the Price level rose by 3 % cost today if there had no... Capita also takes into account income that has been a guide to real GDP is... Gdp removes the effect of inflation, the calculation of GDP per capita along with practical and... 10 trillion / 250 million 2 the nation 's gross domestic product.” Accessed July 22 2020... Done as follows: =$ 400,000,000 / 200,000 GDP per capita: Why real GDP capita... Nominal ” means GDP per capita for Each economy we calculate it holding prices constant that has a! More output 's consider at t-1 a population of the country is 956,899 as per last... Rising, not because we were producing more output that is a financial writer, investor, the... In XLS similar to another measure of inflation, the Consumer Price Index with the formula I you... Divided by the U.S. 
national income per capita producing more output with practical examples and downloadable template. Is then divided by number of population that has been earned from interest and dividends overseas of per capita a... Also takes into account income that has been earned from interest and overseas. This video shows how to calculate nominal and real gross domestic Product by its population was.! Our per capita = real GDP per capita =Gross domestic Product by its population 100! Rates of real GDP is the GDP … Sample calculation of per capita divide. Recent United States for GDP per capita along with practical examples and downloadable excel.... Inside a country was 100 chage of the country with the formula for real GDP / population 1 to!
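As a rough sketch (not from the article; the function name and figures are illustrative, taken from the examples above), the calculation can be written in a few lines of Python:

```python
def real_gdp_per_capita(nominal_gdp, deflator, population):
    """Deflate nominal GDP, then divide by population.

    `deflator` is expressed as a fraction, e.g. 0.25 for 25%.
    """
    real_gdp = nominal_gdp / (1 + deflator)
    return real_gdp / population

# $450 billion nominal GDP, 25% deflator, 100 million people -> $3,600 per person
print(real_gdp_per_capita(450_000_000_000, 0.25, 100_000_000))

# $100 GDP, no inflation adjustment needed, population 100 -> $1 per person
print(real_gdp_per_capita(100, 0.0, 100))
```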
https://www.varsitytutors.com/act_math-help/integers/arithmetic/operations
# ACT Math : Operations

## Example Questions

### Example Question #1 : Operations

A car averages 29 miles per gallon. If gas costs $3.75 per gallon, how much money would need to be spent on gas to travel 1464.5 miles?

Possible answers: $189.38, $5491.88, $108.75, $50.50

Correct answer: $189.38

Explanation: The question asks for the amount of money that must be spent on gas. Using the car's miles per gallon and the distance traveled, we can determine the fuel required: 1464.5 miles divided by 29 miles per gallon gives 50.5 gallons of fuel used. Multiplying 50.5 gallons by the price of $3.75 per gallon gives $189.38.

### Example Question #2 : Operations

Connie's car gets 35 miles per gallon of gas. How much gas will she need for a 525-mile trip? Round to the nearest gallon.

Possible answers: 17, 15, 20, 25, 23

Correct answer: 15

Explanation: This is a division problem: 525 miles ÷ 35 miles per gallon = 15 gallons.

### Example Question #2 : How To Divide Integers

A father buys a bag of marbles. He divides the marbles among his 5 children, who receive 24 marbles each. If there had been 6 children, how many marbles would each one get?

Possible answers: 22, 24, 20, 18

Correct answer: 20

Explanation: There were 120 marbles to begin with, since 5 × 24 = 120. Split between 6 children, each child gets 120 ÷ 6 = 20 marbles.

### Example Question #1 : Operations

At the end of December, Abraham adds the average number of stamps he has been collecting over the year to his album. How many stamps will he have at the end of December?

Explanation: In 11 months, Abraham has collected 155 stamps, or an average of  stamps per month. At the end of December, he should have  stamps.

### Example Question #1 : How To Subtract Integers

On an airline flight of 250 people there is a choice of pasta, salad, or pizza for dinner. If 113 passengers choose pasta and 30% choose salad, how many passengers choose pizza?

Possible answers: 53, 62, 100, 75

Correct answer: 62

Explanation: 30% of 250 is 75. Therefore, 250 – 75 – 113 = 62.

### Example Question #1 : How To Subtract Integers

At a store, a t-shirt costs . If Shane bought  t-shirts and shoes from the store for , how much were the shoes?

Explanation: If one shirt costs , then two shirts must cost . The difference between the total and this number is the cost of the shoes.

### Example Question #2 : Operations

 is equivalent to which of the following?

Explanation: To answer this question, we must remember how to distribute a negative when we are subtracting an entire expression. To distribute a negative, change the sign of every term within the expression being subtracted, from positive to negative or negative to positive, and then add the resulting expression. We can then combine the like terms normally.

### Example Question #1 : How To Multiply Integers

A hockey team wants to purchase uniforms for its 26 players, 2 of which are goalies. It costs $40 for a one-time designing fee. Goalie jerseys cost $24 and all other jerseys cost $20. How much does it cost to purchase all of the uniforms?

Possible answers: $664, $568, $520, $528, $608

Correct answer: $568

Explanation: There are 24 regular players and 2 goalies. Thus the total cost is 24($20) + 2($24) + $40 = $568.

### Example Question #1 : How To Multiply Integers

Sam is paid $15 per hour for laying bricks. If he lays 3,000 bricks at a rate of 5 bricks per minute, how much will he be paid?

Possible answers: $1,500, $150, $300, $200, $9,000

Correct answer: $150

Explanation: First we need to convert units. If Sam lays 5 bricks per minute, then multiply that figure by 60 to find that he lays 300 bricks per hour. 3,000 bricks divided by 300 bricks per hour means he worked for 10 hours. Therefore, when we multiply $15 per hour by 10 hours, we see he makes $150.

### Example Question #1 : How To Multiply Integers

Joey is beginning to invest in the stock market. He purchased 150 shares of a cell phone company. The company pays a dividend of $2.55 per share. What is his total dividend?

Possible answers: $280.25, $301.25, $255.00, $382.50, $451.75
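The unit-conversion pattern behind these word problems is easy to check mechanically. A small Python sketch (not part of the original page) for the gas-cost and bricklaying questions:

```python
# Gas cost: miles driven -> gallons of fuel -> dollars spent
miles, mpg, price_per_gallon = 1464.5, 29, 3.75
gallons = miles / mpg                    # 50.5 gallons
cost = gallons * price_per_gallon
print(round(cost, 2))                    # 189.38

# Bricklaying: bricks -> hours worked -> pay
bricks, bricks_per_min, hourly_rate = 3000, 5, 15
hours = bricks / (bricks_per_min * 60)   # 300 bricks/hour -> 10 hours
print(hours * hourly_rate)               # 150.0
```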
https://tex.stackexchange.com/questions/169475/referencing-in-subfloat-not-working
# referencing in subfloat not working

    \documentclass[journal]{./IEEE/IEEEtran}
    \usepackage{cite,graphicx}
    \usepackage{url}
    \usepackage{float}
    \usepackage{subfig}
    \begin{document}
    \begin{figure}[H]
    \end{figure}
    one of the... \ref{piknik} ....
    \end{document}

I want to reference an image from the figure to a paragraph using \ref but it is not working. What should I do?

---

When you process your example document you'll receive several warnings; the second one is something like the following:

    Package caption Warning: \label without proper \caption on input line 10.
    See the caption package documentation for explanation.

which suggests that you had the \label for the subfigure in the wrong position (in fact, it was outside \subfloat). The safest place to use \label with \subfloat is either right after the caption inside the first optional argument (as in my example below) or inside the mandatory argument:

    \documentclass[journal]{IEEEtran}
    \usepackage{cite}
    \usepackage[demo]{graphicx}
    \usepackage{url}
    \usepackage{float}
    \usepackage[caption=false]{subfig}
    \begin{document}
    \begin{figure}[H]
    \subfloat[Gardenia\label{piknik}]{\includegraphics[scale=.035,angle=-90]{images/gardenia2.jpg}}
    \end{figure}
    A cross-reference to subfigure~\ref{piknik} ....
    \end{document}

Notice that using the caption package with the IEEEtran document class might produce undesired results; you should load the subfig package with the caption=false option, as I did in my example code. The demo option for graphicx simply replaces actual figures with black rectangles; do not use that option in your actual document. Of course, restore the original first line to load the class.

• \usepackage[caption=false]{subfig} is the proper way with IEEEtran. – egreg Apr 4 '14 at 9:10
https://stats.stackexchange.com/questions/82356/robust-monotonic-regression-in-r
# Robust monotonic regression in R

I have the following table in R:

    df <- structure(list(x = structure(c(12458, 12633, 12692, 12830, 13369,
        13455, 13458, 13515), class = "Date"), y = c(6080, 6949, 7076, 7818,
        0, 0, 10765, 11153)), .Names = c("x", "y"), row.names = c("1", "2",
        "3", "4", "5", "6", "8", "9"), class = "data.frame")

    > df
               x     y
    1 2004-02-10  6080
    2 2004-08-03  6949
    3 2004-10-01  7076
    4 2005-02-16  7818
    5 2006-08-09     0
    6 2006-11-03     0
    8 2006-11-06 10765
    9 2007-01-02 11153

I can plot the points and a Tukey resistant-line fit (the line function in R) via

    plot(data=df, y ~ x)
    lines(df$x, line(df$x, df$y)$fitted.values)

All fine. The plot shows energy consumption values, expected only to increase, so I'm happy with the fit not passing through those two zero points (which will subsequently be flagged as outliers). However, just removing the last point and plotting again,

    df <- df[-nrow(df),]
    plot(data=df, y ~ x)
    lines(df$x, line(df$x, df$y)$fitted.values)

the result is completely different. Ideally I need the same result in both scenarios above. R doesn't seem to have a ready-to-use function for monotonic regression, besides isoreg, which however is piecewise constant.

EDIT: As @Glen_b pointed out, the outliers-to-sample-size ratio is too big (~28%) for the regression technique used above. However, I believe there might be something else to consider. If I add two points at the beginning of the table,

    df <- rbind(data.frame(x=c(as.Date("2003-10-01"), as.Date("2003-12-01")),
        y=c(5253, 5853)), df)

and recalculate as above,

    plot(data=df, y ~ x); lines(df$x, line(df$x, df$y)$fitted.values)

I get the same result, even though the ratio is now only ~22%.

• Could you explain to us what you mean by "Tukey's line"? (He used various resistant line fitting methods.) – whuber Jan 15 '14 at 15:27
• @whuber oh I see, sorry. It is the method implemented in the R function line.
You can have more details by typing ?line in the R console – Michele Jan 15 '14 at 15:30
• Thanks, but I'm afraid that's no good at all: the help merely refers to Tukey's 1977 EDA book--with which I am quite familiar and in which I can identify many line-fitting methods--and the code simply invokes a C program. Maybe we could make progress if you could explain more clearly what you are trying to achieve. How would you characterize (in general) the difference between your two "scenarios"? Why do you prefer the first solution? – whuber Jan 15 '14 at 15:33
• (+1) "Should only increase" is key: you are asking about how to perform (robust) monotonic regression. It would help to emphasize that point more in your question: you will get better answers. – whuber Jan 15 '14 at 15:51
• @Michele Maybe you could have a look at the nnls package (non-negative least squares). That should help you with the positivity constraints, but not with the outliers. – Matteo Fasiolo Jan 16 '14 at 11:14

I note that after you delete the last point, you only have seven values of which two (28.6%!) are outliers. Many robust methods don't have a breakdown point quite that high (e.g., Theil regression breaks down right at that point for n=7, though at large $n$ it goes to 29.3%), but if you must have such a high breakdown that it can manage so many outliers, you need to choose some approach that actually has that higher breakdown point. There are some available in R; the rlm function in MASS (M-estimation) should deal with this particular case (it has high breakdown against y-outliers), but it won't have robustness to influential outliers. Function lqs in the same package should deal with influential outliers, or there are a number of good packages for robust regression on CRAN. You may find Fox and Weisberg's Robust Regression in R (pdf) a useful resource on several robust regression concepts.
All this is just dealing with robust linear regression and is ignoring the monotonicity constraint, but I imagine that will be less of a problem if you get the breakdown issue sorted. If you're still getting negative slope after performing high-breakdown robust regression, but want a nondecreasing line, you would set the line to have slope zero - i.e. choose a robust location estimate and set the line to be constant there. (If you want robust nonlinear-but-monotonic regression, you should mention that specifically.)

In response to the edit: You seem to have interpreted my example of the Theil regression as a comment about the breakdown point of line. It was not; it was simply the first example of a robust line that came to me which broke down at a smaller proportion of contamination. As whuber already explained, we can't easily tell which of several lines is being used by line. The reason why line breaks down as it does depends on which of several possible robust estimators that Tukey mentions line might use. For example, if it's the line that goes 'split the data into three groups and for the slope use the slope of the line joining the medians of the outer two thirds' (sometimes called the three-group resistant line, or the median-median line), then its breakdown point is asymptotically 1/6, and its behavior in small samples depends on exactly how the points are allocated to the groups when $n$ is not a multiple of 3. Please note that I am not saying it is the three-group resistant line that is implemented in line - in fact I think it isn't - but simply that whatever they have implemented in line may well have a breakdown point such that the resulting line cannot deal with 2 odd points out of 8, if they're in the 'right' positions.
In fact the line that is implemented in line has some bizarre behavior - so odd that I wonder if it might have a bug - if you do this:

    x = y = 1:9   # all points lie on a line with slope 1
    plot(x, y)
    abline(line(x, y), col = 2)

Then the fitted line has slope 1.2. Off the top of my head, I don't recall any of Tukey's lines having that behavior.

• Hi thanks for all of that. At the moment my need is linear monotonic regression. The breakdown point is very interesting and I definitely should have considered it. However, can you please comment on the third chart I have just added? I add two points at the beginning, which should bring the ratio to 22%. – Michele Jan 16 '14 at 10:46
• by the way lqs does the job! So I accept your answer :-) thanks a lot. If you could still help me out in understanding the third chart would be great! Cheers – Michele Jan 16 '14 at 11:24
• I have made an edit that I hope clarifies things somewhat. If I've missed something, let me know. – Glen_b -Reinstate Monica Jan 16 '14 at 22:51
• Many thanks. I'm not a statistician (easy to tell), and you've been very helpful! I think I'll stick with lqs for now. Thanks again – Michele Jan 17 '14 at 10:22
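As an aside, the breakdown-point idea discussed above is easy to experiment with outside R. Below is a small pure-Python sketch (not from the thread; the data are synthetic, mimicking a rising series with two zeroed-out outliers like the question's) of Siegel's repeated-median line, whose breakdown point approaches 50%:

```python
from statistics import median

def repeated_median_line(xs, ys):
    """Siegel's repeated-median estimator: for each point i, take the
    median of the slopes to all other points, then take the median of
    those per-point medians. Breakdown point approaches 50%."""
    n = len(xs)
    slopes = []
    for i in range(n):
        s = [(ys[j] - ys[i]) / (xs[j] - xs[i])
             for j in range(n) if xs[j] != xs[i]]
        slopes.append(median(s))
    slope = median(slopes)
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return intercept, slope

# Monotone data (true line y = 10 + 2x) with two gross outliers set to 0
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [10, 12, 14, 16, 0, 0, 22, 24]
b, m = repeated_median_line(xs, ys)
print(b, m)  # 10.0 2.0 -- the two outliers do not pull the fit off
```

Here 2 outliers out of 8 points (25%) leave the fit on the underlying trend, exactly the situation where a lower-breakdown line fails.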
https://socratic.org/questions/if-sides-a-and-b-of-a-triangle-have-lengths-of-9-and-12-respectively-and-the-ang
# If sides A and B of a triangle have lengths of 9 and 12 respectively, and the angle between them is (pi)/3, then what is the area of the triangle?

Dec 18, 2015

$27 \sqrt{3}$

#### Explanation:

Dropping the altitude $h$ onto the side of length 12 creates a right triangle with the side of length 9 as its hypotenuse and the angle $\frac{\pi}{3}$ at its base. From that right triangle we have

$\sin \left(\frac{\pi}{3}\right) = \frac{h}{9} \implies h = 9 \sin \left(\frac{\pi}{3}\right) = \frac{9 \sqrt{3}}{2}$

Then, applying the area formula $A = \frac{1}{2} b h$:

$\text{area} = \frac{1}{2} \left(12\right) \left(\frac{9 \sqrt{3}}{2}\right) = 27 \sqrt{3}$
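The same number falls out of the general SAS area formula $A = \frac{1}{2} a b \sin C$ directly. A quick numeric sanity check in Python (not part of the original answer):

```python
import math

a, b = 9, 12
angle = math.pi / 3

# Area = (1/2) * a * b * sin(C), where C is the included angle
area = 0.5 * a * b * math.sin(angle)
print(area)                 # approximately 46.765
print(27 * math.sqrt(3))    # the exact answer, same value
```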
http://computationalnonlinear.asmedigitalcollection.asme.org/article.aspx?articleid=1694117
Research Papers

# Simple Recipe for Accurate Solution of Fractional Order Equations

Author and Article Information:

Sambit Das, Department of Mechanical Engineering, IIT Kharagpur 721302, India, e-mail: sambitiitkgp@gmail.com

Anindya Chatterjee (corresponding author), Department of Mechanical Engineering, IIT Kanpur 208016, India, e-mail: anindya100@gmail.com. Former address: Indian Institute of Technology, Kharagpur, 721302.

Equation (6) can now be used directly to determine $\dot{x}$, which can be used in Eq. (5) to find $\dot{a}$.

Contributed by the Design Engineering Division of ASME for publication in the Journal of Computational and Nonlinear Dynamics. Manuscript received May 4, 2012; final manuscript received October 21, 2012; published online December 19, 2012. Assoc. Editor: J. A. Tenreiro Machado.

J. Comput. Nonlinear Dynam 8(3), 031007 (Dec 19, 2012) (7 pages) Paper No: CND-12-1069; doi: 10.1115/1.4023009. History: Received May 04, 2012; Revised October 21, 2012

## Abstract

Fractional order integrodifferential equations cannot be directly solved like ordinary differential equations. Numerical methods for such equations have additional algorithmic complexities. We present a particularly simple recipe for solving such equations using a Galerkin scheme developed in prior work. In particular, matrices needed for that method have here been precisely evaluated in closed form using special functions, and a small Matlab program is provided for the same. For equations where the highest order of the derivative is fractional, differential algebraic equations arise; however, it is demonstrated that there is a simple regularization scheme that works for these systems, such that accurate solutions can be easily obtained using standard solvers for stiff differential equations. Finally, the role of nonzero initial conditions is discussed in the context of the present approximation method.

## Figures

- Fig. 1: Solution of Eq. (4), and Eqs. (5) and (6) with initial conditions a(0) = 0 and . (a) The two solutions are visually close on the graph. (b) The error is small.
- Fig. 2: Solution of Eq. (7), and Eqs. (8) and (9) with initial conditions a(0) = 0 and x(0) = 0. (a) The two solutions are visually close on the graph. (b) The error is small.
- Fig. 3: Galerkin solution of Eq. (10) using Eqs. (13) and (14) with N = 10, and numerical solution of Eqs. (11) and (12) with initial conditions a(0) = 0. (a) The two solutions are visually close on the graph. (b) The error (here the difference between the two solutions) is small.
- Fig. 4: Solution of Eq. (15) ($x = t^{1/3}$), and Eqs. (16) and (17) with initial conditions $a(t_0) = 0$ and $x(t_0) = 0$, $t_0 = 10^{-8}$. (a) The two solutions are visually close on the graph. (b) The error is small.
- Fig. 5: Solution of Eq. (18) from Eqs. (19), (20), (21) and (22) with initial conditions $x(0) = 0$, $\dot{x}(0) = 0$ and $a_1(0) = a_2(0) = a_3(0) = 0$.
- Fig. 6: Solution of Eq. (23) (from Maple), and Eqs. (25), (26), and (27) with initial conditions $a_1(0) = 0$, $a_2(0) = 0$, and $x(0) = 0$. (a) The two solutions are visually close on the graph. (b) The error is small.
- Fig. 7: (a) Two different square pulses used as forcing. (b) Solution of Eq. (18) with initial conditions $x(0) = 0$ and $\dot{x}(0) = 0$. x(1) is the same in each case, but the subsequent unforced evolutions differ because the system retains a memory of past forcing.
https://codereview.stackexchange.com/questions/251564/recursion-with-char-in-c?noredirect=1
# Recursion with char[] in C#

I am trying to learn recursion and have a question involving an array that needs to be reversed. I am focusing on C#, but the language probably doesn't matter a whole lot because my function is not using any libraries. Here is my code:

    char[] ReverseString(char[] s, int i = 0)
    {
        char[] x = s;
        char temp = s[i];
        s[i] = s[s.Length - (i + 1)];
        s[s.Length - (i + 1)] = temp;
        i++;
        if (i < (s.Length) / 2)
        {
            ReverseString(s, i);
        }
        return x;
    }

Any suggestions on how I should improve my function (maybe time complexity or using libraries (NuGet packages)) are appreciated as well.

• The same problem has been addressed many times: 1, 2, 3, etc... – Peter Csala Nov 4 '20 at 9:51
• Tip: System.Linq is great when you want to do something with arrays or collections: `string hello = "Hello World!"; string olleh = new string(hello.Reverse().ToArray());` Output: `!dlroW olleH`. – aepot Nov 6 '20 at 18:40
https://fhassler.de/talks/
One of the great side effects of my work is that it allows me to travel frequently and meet many inspiring people. I keep a record of all the places I have visited on this page. Additionally, you can find the slides or notes of my recent talks here.

## Invited talks at conferences and workshops

1. "Poisson-Lie T-duality, Integrability and Quantum Corrections", based on [5,4,3], abstract Poisson-Lie groups emerge naturally in the classical limit of quantum groups. Besides their important role in mathematics, they are also central to the phenomenon of T-duality in physics. Originally, T-duality arises in the context of string theory, but over the last decade it has also become an essential tool to study integrable two-dimensional $\sigma$-models. While this approach works very well in the classical regime, we only started to understand its implications for quantum corrections last year. After giving an introduction to Poisson-Lie T-duality and integrable $\sigma$-models, I will discuss these recent developments and their implications.
2. Integrability, dualities and deformations, 30th March - 3rd April 2020, Hospedería San Martín Pinario, Santiago de Compostela, Spain (canceled)
3. AMS Spring Southeastern Sectional Meeting, 13th - 15th March 2020, University of Virginia, Charlottesville, USA (canceled)
4. "Consistent Truncations and Dualities", based on [6], slides, abstract Southwest Strings Meeting 2020, 14th - 15th February 2020, Department of Physics, Utah State University, Logan, USA
5. "The Many Facets of Poisson-Lie T-duality", based on [9,10], slides, abstract Poisson-Lie T-duality was originally introduced to identify the dynamics of closed strings probing different target spaces. But nowadays it has also become a crucial ingredient in the construction of integrable, two-dimensional $\sigma$-models. After reviewing this intriguing connection from a worldsheet perspective, I will switch to the target space.
There we are going to see that Poisson-Lie T-duality naturally appears in the context of gauged SUGRAs and consistent truncations, which help to construct new AdS vacua. Integrability, duality and beyond, 3rd - 5th June 2019, Facultade de Física, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
6. "Poisson-Lie Symmetry and Double Field Theory" (two one-hour review talks), based on [10,13,15,20,21], slides, abstract Both constituents of my title are well-established areas of research with a wide range of applications. Unfortunately, in the current standard formulation of DFT only the tiny subset of PL symmetry which gives rise to abelian T-duality is manifest. In my talk, I present an altered DFT version, DFT on group manifolds, which makes the full PL symmetry manifest. We discuss both the NS/NS and R/R sector of the theory. The latter allows us to derive the transformation rules for R/R field strengths under full PL T-duality for the first time. If time permits, I will also comment on applications in integrable deformations and the extension of the framework to also capture dressing cosets. "String: T-duality, Integrability and Geometry", 4th - 8th March 2019, Tohoku University, Sendai, Japan
7. "Poisson-Lie T-Duality in Double Field Theory", based on [13,10], slides, abstract A formulation of Double Field Theory is presented which makes Poisson-Lie T-duality manifest. It allows us to identify the doubled space with a Drinfeld double and provides a powerful tool to extract the transformation of the metric, B-field, dilaton and R/R potentials under Poisson-Lie T-duality. Fundamental Interactions, Geometry and Topology, 25th October 2018, University of Naples Federico II, Department of Physics "Ettore Pancini", Italy
8. "Taking Advantage of Poisson-Lie Symmetry", based on [13,10], slides, abstract Dualities and Generalized Geometries, 10th - 15th September 2018, Corfu, Greece
9.
10.
"Integrability, Poisson-Lie Symmetry and Double Field Theory", based on [13,20], slides, abstract I review how integrability allows us to explore the planar limit of the AdS/CFT correspondence for arbitrary values of the 't Hooft coupling. In string theory, integrability of the 2D $\sigma$-model is closely related to Poisson-Lie symmetry. Double Field Theory can be used to make this symmetry manifest and therewith provides a new tool to study the implications for the gravity side of the correspondence.
11. "Poisson-Lie T-Duality in Double Field Theory", based on [13,20], slides, abstract A formulation of Double Field Theory is presented which makes Poisson-Lie T-duality manifest. It allows us to identify the doubled space with a Drinfeld double and provides a powerful tool to extract the transformation of the metric, $B$-field, dilaton and R/R potentials under Poisson-Lie T-duality. String Dualities and Geometry, 15th - 19th January 2018, Centro Atomico Bariloche, San Carlos de Bariloche, Argentina
12. "Surprisingly Complex Punctures from a Dynamical System" (short version), based on [12], slides, abstract Theories of class S are 4D N=2 SCFTs which result from the compactification of 6D N=(2,0) SCFTs on punctured Riemann surfaces. They provide a geometric perspective on S-duality and are essential in the AGT correspondence. I will present the first step in extending this construction to N=1. To this end, we discuss the punctures relevant in the compactification of the world-volume theory of M5-branes probing an ADE-singularity. They are closely related to the time evolution of a dynamical system and exhibit a surprisingly rich and complex structure compared to N=2. String Pheno 2017, 3rd - 7th July 2017, Virginia Tech, Blacksburg, USA
13. "Extended Space for (half) Maximally Supersymmetric Theories", based on [14,15], slides, abstract Recent Advances in T/U-dualities and Generalized Geometries, 6th - 9th June 2017, Rudjer Bošković Institute, Zagreb, Croatia
14.
"Surprisingly Complex Punctures from a Dynamical System" (long version), based on [12], slides, abstract Theories of class S are 4D N=2 SCFTs which result from the compactification of 6D N=(2,0) SCFTs on punctured Riemann surfaces. They provide a geometric perspective on S-duality and are essential in the AGT correspondence. I will present the first step in extending this construction to N=1. To this end, we discuss the punctures relevant in the compactification of the world-volume theory of M5-branes probing an ADE-singularity. They are closely related to the time evolution of a dynamical system and exhibit a surprisingly rich and complex structure compared to N=2. 17th southeastern regional mathematical string theory meeting, 8th April 2017, UNC Chapel Hill, Chapel Hill, USA
15. "Generalized Parallelizable Spaces from Exceptional Group Manifolds", based on [14,17], slides, abstract Generalized Geometry & T-dualities, 9th - 13th May 2016, Simons Center for Geometry and Physics, Stony Brook, USA
16. "Double Field Theory on Group Manifolds", based on [19,20,21], slides, abstract
17. "String Geometry Beyond the Torus", based on [21], slides, abstract Besides propagating in target space like a point particle, a closed string is also able to wind around non-contractible circles. A direct consequence thereof is T-duality. In the textbook example, it identifies the closed string dynamics on a large and a small circle by interchanging its winding and momentum modes. Patching a background by such dualities clearly goes beyond the notion of conventional geometry. However, there are extensive efforts to embed them into a framework called string geometry. It provides access to a vast number of new backgrounds with intriguing phenomenology, e.g. the possibility to obtain de Sitter vacua. Double Field Theory (DFT) is the most promising approach to describe these backgrounds and their properties. But still, it is closely related to the torus.
I will present a theory based on Closed String Field Theory, starting from a Wess-Zumino-Witten model, which goes beyond the torus. It plays an important role in clarifying the recent confusion about different constraints in DFT. Furthermore, it allows us to uplift a large class of new backgrounds to string theory. These backgrounds are not T-dual to any geometric ones. 12th southeastern regional mathematical string theory meeting, 25th October 2014, Duke University, Durham, USA
18. "DFT Beyond the Torus", based on [21], abstract Workshop "Frontiers in String Phenomenology", 28th July - 1st August 2014, Ringberg Castle, Tegernsee, Germany
19. "Consistent Compactification of Double Field Theory on Non-Geometric Backgrounds", based on [22,23], slides, abstract Bayrischzell Workshop 2014 on quantized geometry and physics, 23rd - 26th May 2014, Bayrischzell, Germany
20. "Stringy Geometries in the Context of Double Field Theory", based on [23], slides, abstract Science Week 2013, 2nd - 5th December 2013, Max Planck Institute for Extraterrestrial Physics, Garching, Germany
21. "Non-commutative IIA and IIB geometries from Q-branes and their intersection", based on [25], slides, abstract

## Invited seminar talks

22. "$\alpha'$-corrections and Generalised Dualities", based on [5,4], abstract, blackboard Higher derivative corrections in gravity and gauge theories are key to approaching some of the big questions in theoretical physics, like the evolution of the universe or the resolution of black hole singularities. I explain why they are so important but also why they are very hard to obtain. Dualities, which for example arise naturally in string theory, have recently provided a powerful tool to get a better handle on these corrections. Unfortunately, the most well understood duality in this context, abelian T-duality, is very restrictive.
There is the more versatile framework of generalised T-dualities with a wide range of applications, but its relevance for higher derivative corrections only started to emerge last year. We discuss these new developments and point out some of their applications. ExFT Journal Club, 18th May 2021, virtual
23. "O($D$,$D$)-covariant $\beta$-functions", based on [4,3], slides, abstract Symmetries play a central role in theoretical physics. But we have to understand how they are realized in order to benefit from them in calculations. An interesting example along this line is a large class of two-dimensional quantum field theories, called $\sigma$-models. I will show that they exhibit a hidden, continuous symmetry which is governed by the Lie group O($D$,$D$). It tightly constrains both the classical and the quantum regime of these models. As an example, I will demonstrate how it affects the one- and two-loop $\beta$-functions. The resulting insights are indispensable to compute the RG flows of integrable $\sigma$-models. Mathematical Physics seminar, 22nd January 2021, University of York, York, UK, virtual
24. "One- and two-loop RG flows of integrable E-models", based on [3,4], abstract, blackboard There is an intriguing connection between integrable $\sigma$-models and Poisson-Lie (PL) symmetry. As I will review, the latter is manifest in the $\mathcal{E}$-model, rendering it a powerful tool to construct a variety of integrable models which recently have been identified with surface defects in 4d Chern-Simons theory. Manifest PL symmetry facilitates computations which would be forbiddingly complex without it. Important examples, which I will discuss in detail, are one- and two-loop beta-functions. We will see that they underpin a deep connection between classical integrability and the corresponding quantum regime.
25.
"Quantum corrections for generalised T-dualities", based on [4], slides, abstract S- and (abelian) T-duality play a central role in string theory, but their scope is limited to highly constrained spacetimes. Generalised T-dualities, which include non-abelian and Poisson-Lie T-duality, apply to a significantly larger class of target spaces with a wide range of applications. Classically they are on an equal footing with abelian T-duality, but their quantum corrections are much more mysterious and mostly unexplored. I will review the main problems which have to be solved to make progress in this direction. Afterwards, I demonstrate how recently explored connections between generalised T-dualities, double field theory, and consistent truncations allow us to prove that two-loop RG flows are preserved under these dualities. We will discuss the implications of this result and emphasise how the intriguing mathematical structures that govern all the current applications influence quantum corrections in a highly non-trivial way. Exceptional Geometry Seminar Series, 2nd October 2020, virtual
26. High Energy Physics Theory Group Seminar, 24th March 2020, University of Oviedo, Oviedo, Spain (canceled)
27. "Generalised T-dualities, Integrable $\sigma$-models and Supergravity", based on [13,10,9,7], abstract, blackboard Abelian T-duality is well appreciated among string theorists. But it forms only a very small part of a much larger family of generalised T-dualities, which however attracted much less attention until recently. The main reason for their marginalization is that, in contrast to abelian T-duality, they are not a symmetry of full string theory, i.e. they do not hold for the full $\alpha'$ and $g_s$ expansion. Still, they turned out to be essential in the recent quest for integrable $\sigma$-models and also in the construction of explicit solutions to supergravity.
Thus, I want to present the idea behind generalised T-dualities on the worldsheet and show how they are connected to integrability and supergravity. High-Energy Theory Seminar, 13th January 2020, Mitchell Institute, Texas A&M, College Station, USA
28.
29. "Generalised Quotients", based on [7], abstract, blackboard Generalised Scherk-Schwarz reductions are a powerful tool to construct consistent truncations in Double and Exceptional Field Theories. Recently, it turned out that they are also closely related to Poisson-Lie T-duality. However, the most general form of Poisson-Lie T-duality, the dressing coset construction, cannot be implemented in terms of a generalised Scherk-Schwarz ansatz. I will show that implementing it in generalised geometry leads to a natural extension of the generalised Scherk-Schwarz ansatz which comes with many new features: 1) partial or full breaking of SUSY, which allows us to find many new examples of generalised Kähler or Calabi-Yau manifolds; 2) singular backgrounds with localised sources; 3) localised vector multiplets while still resulting in consistent truncations.
• Group Seminar, 19th December 2019, Arnold Sommerfeld Center for Theoretical Physics, Munich, Germany
• CRST Seminar, 11th December 2019, Centre for Research in String Theory (CRST), Queen Mary University of London, London, UK
30.
34. "Poisson-Lie T-Duality in Double Field Theory" (with DFT introduction), based on [13], slides, abstract A formulation of Double Field Theory is presented which makes Poisson-Lie T-duality manifest. It allows us to identify the doubled space with a Drinfeld double and provides a powerful tool to extract the transformation of the metric, B-field, dilaton and R/R potentials under Poisson-Lie T-duality.
31.
35.
"Integrability, Poisson-Lie Symmetry and Double Field Theory", based on [13,20,10], slides, abstract I review how integrability allows us to explore the planar limit of the AdS/CFT correspondence for arbitrary values of the 't Hooft coupling. In string theory, integrability of the 2D $\sigma$-model is closely related to Poisson-Lie symmetry. Double Field Theory can be used to make this symmetry manifest and therewith provides a new tool to study the implications for the gravity side of the correspondence.
36. "Double Field Theory", slides, abstract String theory's secret in approaching a UV-complete theory of gravity is the transition from point particles to extended objects. A direct consequence is the emergence of dualities, like T-, S- and U-duality. They have numerous applications and are essential for studying string theory, but for me the most intriguing question is: "Can they teach us more about (quantum) gravity?". My talk gives a short introduction to double / exceptional field theory, a framework which is motivated exactly by this question. Instead of presenting all the formal details, I will try to focus on the underlying ideas and important applications, like the construction of new flux vacua or consistent truncations of supergravity. Finally, I explain some open questions which are currently under active research and motivate why their answers could also be beneficial for other research areas, like the gauge/gravity correspondence. OIST Seminar, 9th January 2018, Okinawa Institute of Science and Technology Graduate University, Japan
37. "Poisson-Lie T-duality in Double Field Theory", based on [13], slides, abstract Poisson-Lie T-duality is a generalization of traditional non-abelian T-duality and enjoys, at least at the classical level, all features of abelian T-duality. Recently, it got a lot of attention in the context of integrable $\eta$- and $\lambda$-deformations.
I review this intriguing framework, outline its applications and show that it naturally admits a double field theory description. The latter has many applications in the realm of abelian T-duality but did not really touch Poisson-Lie T-duality until now. I present an extension of the current double field theory formulation which makes Poisson-Lie T-duality manifest. It gives rise to various new applications and also introduces powerful new mathematical structures, like Drinfeld doubles and quantum groups, into the theory. YITP Seminar, 9th November 2017, C.N. Yang Institute for Theoretical Physics, Stony Brook, USA
38.
40. "Generalized Parallelizable Spaces, Consistent Truncations and Dualities", based on [13,14], slides, abstract While only three spheres, $S^1$, $S^3$ and $S^7$, are parallelizable, it was recently shown that all spheres are generalized parallelizable. In addition to its beautiful mathematical structure, this extended notion of parallelizability allows us to identify certain maximal gauged supergravities as consistent truncations of 10/11D supergravity. In this talk, I present the first systematic construction of generalized parallelizable spaces and demonstrate how this string theory inspired concept captures T-duality in a natural way.
41. "Double Field Theory - Double Fun?", based on [19,20,21], slides, abstract Dualities are at the heart of string theory. They identify closed string theories in different target spaces (T-duality) and at inverse values of the string coupling (S-duality). To exploit their full potential, it would be great to make them manifest already at the level of the low energy effective action. Double Field Theory (DFT) follows exactly this idea for T-duality on a torus. Incorporating also S-duality in this framework, Exceptional Field Theory (EFT) is born. I review these two approaches and show how they are expected to give new insights into subjects like flux compactifications, moduli stabilization and cosmology.
I emphasize their weak spots and present my humble contribution to dealing with some of these problems. ISCAP Seminar, 3rd March 2016, Columbia University, New York, USA
42. "Exploring Stringy Geometries with Double Field Theory", based on [19,20,21], slides, abstract Only a fraction of the vast landscape of vacua in string theory is accessible from supergravity. Stringy geometries, whose properties are governed by the extended nature of the string, are beyond its scope. Double Field Theory (DFT), which makes T-duality on tori manifest at the level of an effective field theory, provides a convenient tool to explore vacua beyond the supergravity regime. Despite the substantial progress made in this direction, there are still open questions and technical ambiguities. Some of them can be solved by extending the derivation of DFT from a torus to more general non-abelian group manifolds. After a short review of the existing formalism, I derive such a theory, called DFT$_{\mathrm{WZW}}$, using Closed String Field Theory applied to Wess-Zumino-Witten models. Further, I discuss its connection to half-maximal gauged supergravities in lower dimensions. Even for those of them which cannot be generated by a supergravity compactification, DFT$_{\mathrm{WZW}}$ provides a higher dimensional origin. High-Energy Theory Seminar, 19th October 2015, Mitchell Institute, Texas A&M, College Station, USA
43.
50. "String Geometry Beyond the Torus", based on [21], slides, abstract Besides propagating in target space like a point particle, a closed string is also able to wind around non-contractible circles. A direct consequence thereof is T-duality. In the textbook example, it identifies the closed string dynamics on a large and a small circle by interchanging its winding and momentum modes. Patching a background by such dualities clearly goes beyond the notion of conventional geometry. However, there are extensive efforts to embed them into a framework called string geometry.
It provides access to a vast number of new backgrounds with intriguing phenomenology, e.g. the possibility to obtain de Sitter vacua. Double Field Theory (DFT) is the most promising approach to describe these backgrounds and their properties. But still, it is closely related to the torus. I will present a theory based on Closed String Field Theory, starting from a Wess-Zumino-Witten model, which goes beyond the torus. It plays an important role in clarifying the recent confusion about different constraints in DFT. Furthermore, it allows us to uplift a large class of new backgrounds to string theory. These backgrounds are not T-dual to any geometric ones.

## Other conferences, workshops and schools attended

51. Integrability, Dualities and Deformations (conference), 30th August - 3rd September 2021, Hospedería San Martín Pinario, Santiago de Compostela, Spain, virtual
52. Integrability, Dualities and Deformations (school), 23rd - 27th August 2021, Hospedería San Martín Pinario, Santiago de Compostela, Spain, virtual
53. Junior Duality and Integrability Workshop, 16th - 18th February 2021, virtual
54. Integrable effective field theories and their holographic descriptions, 16th - 18th December 2019, Galileo Galilei Institute for Theoretical Physics, Florence, Italy
55. Geometry and Duality, 2nd - 6th December 2019, Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Potsdam, Germany
56. Holography, Generalised Geometry and Duality, 6th - 10th May 2019, Mainz Institute for Theoretical Physics, Mainz, Germany
57. 20 Years Later: The Many Faces of AdS/CFT, 31st October 2017, Princeton University, Princeton, USA
58. 18th southeastern regional mathematical string theory meeting, 7th October 2017, Virginia Tech, Blacksburg, USA
59. SFT@HIT, 23rd - 25th June 2017, Holon Institute of Technology, Holon, Israel
60. Pre-Strings 2017: Advanced Strings School, 18th - 22nd June 2017, Technion, Haifa, Israel
61.
Physics and Geometry of F-Theory 2017, 27th February - 2nd March 2017, ICTP, Trieste, Italy
62. 16th southeastern regional mathematical string theory meeting, 13th - 14th November 2016, NC State University, Raleigh, USA
63. 15th southeastern regional mathematical string theory meeting, 23rd April 2016, Virginia Tech, Blacksburg, USA
64. Symposium on Quantum Fields and Strings, 1st April 2016, The Graduate Center, New York, USA
65. F-Theory at 20, 22nd - 26th February 2016, Burke Institute, Caltech, Pasadena, USA
66. Symposium on Quantum Fields and Strings, 4th December 2015, The Graduate Center, New York, USA
67. Physics and Geometry of F-Theory 2015, 23rd - 26th February 2015, Max Planck Institute for Physics, Munich, Germany
68. 2014 Arnold Sommerfeld School on "Strings and fundamental physics", 11th - 22nd August 2014, Arnold Sommerfeld Center for Theoretical Physics, Munich, Germany
70. Workshop on Noncommutative Field Theory and Gravity, 8th September 2013, Mon-Repos, Corfu, Greece
71. IMPRS Young Scientist Workshop at Ringberg Castle, 22nd - 26th July 2013, Ringberg Castle, Tegernsee, Germany
72. 2007 SBMO/IEEE MTT-S International Microwave and Optoelectronics Conference (IMOC), 29th October - 1st November 2007, Pestana Bahia Hotel, Salvador, Brazil

## Oh, the Places You'll Go!

I could not resist putting pins on the map for all the places I have visited.

## Traveling

As an aviation enthusiast, I try to leave planes as one of the last passengers and get a peek into the cockpit if possible. Usually, the pilots I meet are very nice and take the time to chat about aviation and life in general. For example, I met a guy who got a PhD in biochemistry before he started to fly commercial airliners. On Ryanair flight FR 2834 from Barcelona to Brussels, I was allowed to take the captain's seat of a Boeing 737-800.
https://physics.stackexchange.com/questions/129836/coupled-spring-system-3-mass-3-springs
# Coupled Spring System (3 mass 3 springs)

Hello, I am having trouble trying to find the correct model for this coupled spring system. The scenario is the following: we have Ceiling - Spring - Mass(1) - Spring(2) - Mass(2) - Spring(3) - Mass(3) - End. I came up with the following system of second-order differential equations to model this problem:

$x_1^{''}=[-k_1x_1-k_2(x_2-x_1)-k_3(x_3-x_2)]/m_1$
$x_2^{''}=[-k_2(x_2-x_1)-k_3(x_3-x_2)]/m_2$
$x_3^{''}=-k_3(x_3-x_2)/m_3$

Is this the correct model? Afterwards I am trying to rewrite these equations as 6 first-order differential equations that I can input in matlab and use to plot the position of each mass. So I rewrote them and obtained the following:

$y_1^{'}=y_2$
$y_2^{'}=(-k_1y_1-k_2(y_3-y_1)-k_3(y_5-y_3))/m_1$
$y_3^{'}=y_4$
$y_4^{'}=(-k_2(y_3-y_1)-k_3(y_5-y_3))/m_2$
$y_5^{'}=y_6$
$y_6^{'}=(-k_3(y_5-y_3))/m_3$

I am not sure if this is correct or not. When I plot them in matlab I don't get a sinusoidal wave. A big plus if you guys can tell me how I could animate this system in matlab so that I can see the change in position in all three of the springs.

• You aren't expecting simple sinusoids except in a few special cases. – dmckee Aug 7 '14 at 2:53
• What do you mean? So the model is correct? – adam Aug 7 '14 at 3:02
• I have no idea if the model is correct or not, but not finding simple sinusoids does not, in and of itself, point to a bug. There are a few regular modes, but finding them is an eigenvalue problem and I am unsure if you know that term and what it implies. – dmckee Aug 7 '14 at 3:44
• I know what eigenvalues are but usually spring mass systems' eigenvalues are complex. – adam Aug 7 '14 at 3:47
• IIRC, this example is solved in Taylor's Classical Mechanics.
– jinawee Aug 7 '14 at 8:50

From the free body diagram you must have \begin{align} m_1 \ddot{x}_1 &= F_1 - F_2 \\ m_2 \ddot{x}_2 &= F_2 - F_3 \\ m_3 \ddot{x}_3 &= F_3 \end{align} with the spring forces defined as \begin{align} F_1 & = -k_1 x_1 \\ F_2 & = -k_2 (x_2-x_1) \\ F_3 & = -k_3 (x_3-x_2) \end{align} The above is combined as $$\begin{bmatrix} m_1 & 0 & 0 \\ 0& m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix} \begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \\ \ddot{x}_3 \end{pmatrix} = - \begin{bmatrix} k_1 + k_2 & -k_2 & 0 \\ -k_2 & k_2 + k_3 & -k_3 \\ 0 & -k_3 & k_3 \end{bmatrix} \begin{pmatrix} {x}_1 \\ {x}_2 \\ {x}_3 \end{pmatrix}$$ Which I think matches your equations (you have to check). To make an ODE out of this you need a state vector $$y = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}$$ and its derivative $$\dot{y} = A\,y$$ $$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{v}_1 \\ \dot{v}_2 \\ \dot{v}_3 \end{pmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ - \frac{k_1+k_2}{m_1} & \frac{k_2}{m_1} & 0 & 0 & 0 & 0 \\ \frac{k_2}{m_2} & - \frac{k_2+k_3}{m_2} & \frac{k_3}{m_2} & 0 & 0 & 0 \\ 0 & \frac{k_3}{m_3} & - \frac{k_3}{m_3} & 0 & 0 & 0 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}$$ As long as $\ddot{x}_i \propto - x_i$ there would be a harmonic response. If you are not seeing this, then there is something wrong in how you are using ode45().
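As a sketch, the matrix $A$ from the answer can be assembled and integrated in plain Python with a hand-rolled classical Runge-Kutta step (illustrative names, not from the thread; in MATLAB you would hand the same matrix to ode45):

```python
# Build the 6x6 state matrix A for y = [x1, x2, x3, v1, v2, v3]
# of the ceiling-spring-mass chain, and integrate y' = A y with RK4.

def state_matrix(m, k):
    """A such that y' = A y reproduces the coupled equations of motion."""
    m1, m2, m3 = m
    k1, k2, k3 = k
    return [
        [0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1],
        [-(k1 + k2) / m1, k2 / m1, 0, 0, 0, 0],
        [k2 / m2, -(k2 + k3) / m2, k3 / m2, 0, 0, 0],
        [0, k3 / m3, -k3 / m3, 0, 0, 0],
    ]

def rk4_step(A, y, h):
    """Advance y' = A y by one step of size h (classical Runge-Kutta)."""
    mul = lambda v: [sum(a * b for a, b in zip(row, v)) for row in A]
    k1 = mul(y)
    k2 = mul([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = mul([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = mul([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

Since the system is conservative, the trajectories stay bounded and oscillatory (a superposition of the normal modes), even though no single coordinate is a simple sinusoid, which matches the point made in the comments.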
https://brilliant.org/problems/exponents-or-is-it-indices/
# Exponents (or is it indices?)

Algebra Level 3

Find the sum of all real solutions to the equation $$2^x + 3^x + 6^x = x^2$$
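A quick numeric check (a sketch, not the intended pen-and-paper solution): with $f(x) = 2^x + 3^x + 6^x - x^2$, note that $f(-1) = \tfrac12 + \tfrac13 + \tfrac16 - 1 = 0$, so $x = -1$ is an exact real solution. A bisection locates it numerically; uniqueness of the real solution still needs a separate argument.

```python
def f(x):
    """Left side minus right side of 2^x + 3^x + 6^x = x^2."""
    return 2**x + 3**x + 6**x - x**2

def bisect(lo, hi, tol=1e-12):
    """Locate a sign change of f on [lo, hi] by bisection."""
    assert f(lo) * f(hi) < 0, "need a sign change on the bracket"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:  # sign change in the left half
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

The bracket $[-2, -0.5]$ works because $f(-2) < 0$ while $f(-0.5) > 0$.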
https://help.photosynq.com/desktop-application/connect-an-instrument.html
# Connect an Instrument

You can use Bluetooth or USB to connect your Instrument to your computer. Depending on the Instrument and computer, some connection options may not be available.

Note: Before connecting your MultispeQ to the Desktop App, you need to turn on the MultispeQ by pressing and holding the power button for at least 5 seconds.

## USB connection

1. Select Settings from the left menu bar.
2. Choose the Connection tab from the settings menu.
3. Pick the port the Instrument is connected to from the dropdown menu:
   - Windows: COM{number}
   - macOS: usbmodem{number}
   - Linux: ACM{number}
4. Connect the instrument by clicking on Connect.

## Bluetooth connection

1. Make sure you have your Instrument paired with your computer through your OS preferences. The code for pairing is 1234.
2. Select Settings from the left menu bar.
3. Choose the Connection tab from the dialog.
4. Pick the port the Instrument is connected to from the dropdown menu:
   - Windows: COM{number}
   - macOS: Instrument-name_{number}
   - Linux: not available
5. Connect the device by clicking on Connect.

### Troubleshooting

If you are having trouble connecting to your Instrument, please go through this checklist first:

- Make sure your Instrument is fully charged (at least 6 hours, or overnight).
- Make sure you have turned on the Instrument by pressing and holding the power button for at least 5 seconds. The Instrument will automatically shut off after 3 hours of inactivity by default.
- If you were using the Instrument with another mobile device, you will need to press and hold the power button for 5 seconds to disconnect it from the current device and make it available to the new one.
- If you are using Windows 8 or lower, make sure you have the serial driver installed.
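The USB port-naming conventions listed above can be captured in a small sketch (illustrative only; `matches_convention` is not part of the Desktop App, and the patterns simply mirror the `COM{number}` / `usbmodem{number}` / `ACM{number}` placeholders):

```python
import re

# Map each OS to the USB serial-port naming pattern from the list above.
PORT_PATTERNS = {
    "windows": re.compile(r"^COM\d+$"),
    "macos": re.compile(r"^usbmodem\d+$"),
    "linux": re.compile(r"^ACM\d+$"),
}

def matches_convention(port, os_name):
    """Return True if `port` follows the USB naming convention for `os_name`."""
    pattern = PORT_PATTERNS.get(os_name.lower())
    return bool(pattern and pattern.match(port))
```

A check like this could, for example, pre-filter a dropdown of candidate ports before attempting to connect.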
http://www.docsford.com/document/4138539
ASSESSMENT INSTRUCTIONS
By Joanne Sanchez, 2014-10-17 09:50

ASSESSMENT GUIDE PART I: THE COLLECTION OF INFORMATION ON PENNSYLVANIA'S DEATH PENALTY SYSTEM

CONTENTS

INTRODUCTION
  The Death Penalty Moratorium Implementation Project
  The European Commission Grant
    Grant Description
    Grant Organization
  Information Session and Assessment Team Binder(s)
  The Purpose of the Assessment Guide
  Role of the Assessment Teams
    Assessment Team's Objective for Part I
    Assessment Team's Objective for Part II
  Anticipated Timeline, Drafting Process, and Structure of the Assessment Report
CRIME DATA, DEATH ROW, DNA, AND THE LOCATION AND PRESERVATION OF INFORMATION AND EVIDENCE
  Crime Data and Death Row Demographics in Your State
  Overviews of Reports and Studies Conducted About Your State
  DNA Testing and the Location and Preservation of Information and Evidence
LAW ENFORCEMENT
CRIME LABORATORIES AND MEDICAL EXAMINERS
PROSECUTORS
DEFENSE SERVICES DURING TRIAL, APPEAL, STATE POST-CONVICTION PROCEEDINGS, AND FEDERAL HABEAS CORPUS
  Statewide Systems
  County-Based Public Defender and Private Law Firms
  Appointment, Qualifications, and Training of Trial, Appeal, State Post-Conviction Proceedings, and Federal Habeas Corpus Counsel
  Counsel Performance During Trial, Appeal, State Post-Conviction Proceedings, and Federal Habeas Corpus
  Workload of Trial, Appellate, State Post-Conviction Relief Counsel, and Federal Habeas Corpus Counsel
  Compensation for Trial, Appellate, State Post-Conviction Relief, and Federal Habeas Corpus Counsel and Paralegals
  The Availability and Use of Investigators, Mitigation Specialists, Mental Health Experts, and Other Experts Pre-Trial and During Trial, Appeal, State Post-Conviction Proceedings, and Federal Habeas Corpus
DIRECT APPEAL AND THE UNITARY APPEAL PROCESS
  Non-Unitary Appeal States
  Unitary Appeal States
STATE POST-CONVICTION RELIEF AND FEDERAL HABEAS CORPUS
  Grounds for State Post-Conviction Relief
  Time Limitations, Contents of Petitions, and Second or Successive Petitions
  Stay of Execution
  Evidentiary Hearings and Discovery
  Findings of Law and Fact
  Standards Used When Reviewing State Post-Conviction Petitions
  Retroactivity Rules
CLEMENCY
  Clemency Decisionmakers
  Clemency Petitions and Clemency Interviews, Meetings, and/or Hearings
  Types of Documents Provided to Death Penalty Clemency Decisionmakers Regarding Petition
  Scope of Review Recommended and Scope of Review Actually Performed
  Clemency Decisions
  Politics and Clemency Decisions
  Appointment and Compensation of Counsel During the Clemency Process
  Investigators and Expert Witnesses
  Case Studies
JURY INSTRUCTIONS
JUDICIAL INDEPENDENCE AND VIGILANCE
RACIAL AND ETHNIC MINORITIES AND THE DEATH PENALTY
JUVENILE OFFENDERS
MENTALLY RETARDED AND MENTALLY DISABLED OFFENDERS
  Mentally Retarded
  Mentally Disabled
  Waivers of Rights (In General and as it Pertains to the Mentally Retarded, Mentally Disabled, and Individuals Who Received Ineffective Assistance of Counsel)
  Competency to be Executed
GLOSSARY

INTRODUCTION

A. The Death Penalty Moratorium Implementation Project

On February 3, 1997, the American Bar Association ("ABA"), while taking no position on the death penalty per se, adopted a death penalty moratorium resolution, calling for capital jurisdictions to impose a moratorium on all executions until they can (1) ensure that death penalty cases are administered fairly and impartially, in accordance with due process, and (2) minimize the risk that innocent persons may be executed. Following the adoption of the resolution, the Section of Individual Rights and Responsibilities, as the original sponsor of the resolution, carried out much of the ABA's moratorium implementation work.

In Fall 2001, the ABA created the Death Penalty Moratorium Implementation Project ("the Project") to take over its moratorium efforts. The Project encourages other bar associations to press for moratoriums in their jurisdictions and encourages state government leaders to establish moratoriums and undertake detailed examinations of capital punishment laws and processes in their jurisdictions. The Project also collects and monitors data on domestic and international moratorium developments; conducts analyses of governmental and judicial responses to death penalty administration issues; and publishes periodic reports on moratorium developments.

The Project is staffed by Deborah T. Fleischaker, Director, and Lindsay B. Glauner, Project Attorney, and is advised by a presidentially appointed Steering Committee, comprised of the following individuals:

- James E. Coleman, Jr. (Chair), Professor and Associate Dean, Duke University School of Law;
- Lauralyn Beattie, Counsel, Office of University Counsel, Georgetown University;
- Stephen B. Bright, Director, Southern Center for Human Rights;
- Zachary W. Carter, Partner, Dorsey & Whitney, LLP;
- W.J. Michael Cody, Partner, Burch, Porter & Johnson and former Tennessee Attorney General;
- Ruth Friedman, consultant;
- Thomas Lorenzi, Partner, Lorenzi, Sanchez & Rosteet, LLP;
- Charles J. Ogletree, Jr., Professor, Harvard Law School;
- Morris Overstreet, Professor, Texas Southern University Thurgood Marshall School of Law and former Justice on the Texas Court of Criminal Appeals;
- Cruz Reynoso, Professor, University of California at Davis School of Law and former Justice on the California Supreme Court;
- Thomas Sullivan, Partner, Jenner & Block and Co-Chair of the Illinois Governor's Commission on the Death Penalty; and
- Denise Young, consultant.

B. The European Commission Grant

1.
Grant Description

In February 2003, the European Commission's European Initiative for Democracy and Human Rights ("EC") selected the Project to receive a two-year grant to examine the extent to which U.S. capital jurisdictions' death penalty systems comport with minimum standards of fairness and due process. The objective of the grant is to conduct a preliminary assessment of U.S. death penalty systems, using as a benchmark the protocols set out in the Section of Individual Rights and Responsibilities' 2001 publication, Death without Justice: A Guide for Examining the Administration of the Death Penalty in the United States ("the Protocols"). While the Protocols are not intended to cover exhaustively all aspects of the death penalty, they do cover eight key aspects of death penalty administration, including defense services, procedural restrictions and limitations on state post-conviction and federal habeas corpus, clemency proceedings, jury instructions, an independent judiciary, racial and ethnic minorities, juvenile offenders, and the mentally retarded and mentally ill. The findings of the preliminary assessments will serve as the bases from which to launch the more comprehensive self-examinations of death penalty-related laws and processes that the ABA is encouraging capital jurisdictions to undertake on their own.

2. Grant Organization

The Project has created an Advisory Board to oversee the implementation of the grant. The Advisory Board is comprised of the following individuals:

- Talbot "Sandy" D'Alemberte (Chair), Professor of Law and former Dean, Florida State University College of Law, and former President, Florida State University;
- John J. Gibbons, Partner, Gibbons, Del Deo, Dolan, Grigginger & Vecchione and former Chief Judge, U.S. Court of Appeals for the Third Circuit;
- Parris Glendenning, President, Smart Growth Leadership Institute, and former Maryland Governor;
- Fred Gray, Partner, Gray, Langford, Sapp, McGowan, Gray & Nathanson and former attorney for Rosa Parks and Martin Luther King, Jr.;
- Stephen F. Hanlon, Partner, Holland and Knight;
- Mario Obledo, President, National Coalition of Hispanic Organizations;
- Raymond Paternoster, Professor, Institute of Criminal Justice and Criminology, University of Maryland;
- Virginia Sloan, President, The Constitution Project; and
- Penny Wakefield, former Director, ABA Section of Individual Rights and Responsibilities.

The Project Steering Committee, as well as the Project's staff, will also provide advice and assistance in implementing the grant. The Project encourages your team to contact its staff with any questions or concerns regarding the completion of the assessment.

Each Assessment Team will be responsible for overseeing the assessment in its state. Each team will be chaired by a law school professor and will include or have access to current or former defense attorneys, current or former prosecutors, individuals active in the state bar, current or former judges, state legislators, and anyone else whom the Project and/or team leaders feel should be included to complete the assessment in a timely, thorough manner. The leaders of the Pennsylvania Assessment Team are Professors Michelle Anderson and Anne Poulin. A list of the other team members will be distributed at the initial team meeting. At a minimum, Assessment Team members will provide guidance during the research process and serve as reviewers of the report as it is being drafted. The Project anticipates that the Assessment Teams will hire students to collect the actual data. See infra section I(E) for a detailed discussion of the role of the Assessment Teams.

C.
Information Session and Assessment Team Binder(s)

The Project will meet with your team to review the Assessment Guide, discuss your team's objectives and the anticipated timeline, and address any questions and/or concerns regarding the Assessment Guide, team objectives, and/or timeline. At the meeting, the Project will also provide your team with a binder(s) of useful information. The binder(s) will contain the following:

- your state's current death penalty statute;
- a list of individuals who have been on death row in your state since 1989;
- Death Row U.S.A. reports since 1989;
- a list of individuals who have been exonerated in your state since 1973;
- a list of cites of relevant statutes and articles;
- studies and reports on the administration of the death penalty in your state;
- law review and journal articles on the administration of the death penalty in your state; and
- news reports on the administration of the death penalty in your state.

D. The Purpose of the Assessment Guide

The Project designed the Assessment Guide to provide your team with a thorough background of the death penalty system and a detailed account of potential problems concerning such system. Given the detail of the Assessment Guide, the Project encourages your team to use it as a research guide, not as a checklist of required tasks. Thus, your team should not feel compelled to collect every recommended document or answer every question contained in the Assessment Guide. In fact, the Project believes that a thorough yet preliminary study of the death penalty system in your state still can be completed without collecting all of the recommended documents and/or answering all of the questions contained herein.

The Assessment Guide is divided into thirteen sections, not including the introduction: (1) Crime Data, Death Row, DNA, and the Location and Preservation of Information and Evidence; (2) Law Enforcement; (3) Crime Laboratories and Medical Examiners; (4) Prosecutors; (5) Defense Services During Trial, Appeal, State Post-Conviction Proceedings, and Federal Habeas Corpus; (6) Direct Appeal and the Unitary Appeal Process; (7) State Post-Conviction Relief and Federal Habeas Corpus; (8) Clemency; (9) Jury Instructions; (10) Judicial Independence and Vigilance; (11) Racial and Ethnic Minorities; (12) Juvenile Offenders; and (13) Mentally Retarded and Mentally Disabled Offenders. The thirteen sections are not organized by matter of importance; rather, the order corresponds to the recommendations contained in the Protocols. The Project encourages your team to prioritize the sections, addressing the most potentially problematic areas first. The majority of the thirteen sections begin with an introduction, explaining the relevant substantive issues and identifying potential problems regarding these issues concerning the administration of the death penalty. Each section contains a list of documents, including laws, rules, procedures, standards, guidelines, and leading case law, that the Project recommends that your teams gather, review and explain, and a list of questions pertaining to these documents that should be answered by your team to the extent possible. Lastly, in certain sections, the Project has recommended that your team select at least two to three illustrative cases to highlight how certain systems in your state actually function. See infra section I(E)(1) for a further discussion on the collection and explanation of the recommended information and the case studies.

E. Role of the Assessment Teams

1.
Assessment Team's Objective for Part I: The Collection of Information on Your State's Death Penalty System

Your Assessment Team's objective is to collect the laws, rules, procedures, standards, guidelines, and leading case law recommended in the Assessment Guide. The Project recognizes that some of the recommended documents may not be easily accessible or may be entirely unattainable. As a result, the Project advises your team to use its best judgment to evaluate the potential value of a recommended document in view of the length of time and effort required to obtain it. Please explain each decision by your team not to attempt to obtain a recommended document, or your team's unsuccessful attempts to obtain a recommended document. The Project also recognizes that the list of recommended documents is not exhaustive. Thus, it welcomes additional research into the issues discussed herein.

Following the collection of the recommended data, your team should review and explain the recommended laws, rules, procedures, standards, guidelines, and leading case law and draft answers to the questions to the extent possible. Your team's answers should include detailed cites to the sources relied upon. Please note that when the Project recommends that your team review and explain the "laws, rules, procedures, standards, guidelines, and leading case law" on a particular subject, it does not want a recitation of every law, rule, procedure, standard, guideline, and case on the subject. Instead, the Project expects that your team will use its discretion to identify and explain the most pertinent laws, rules, procedures, standards, guidelines, and leading case law on each subject. The Project also recognizes that if your team is unable to obtain some of the recommended documents, it may be unable to answer certain questions. In these situations, please explain why your team was incapable of answering the questions.

In addition to collecting the recommended information and answering the questions contained herein, the Project, in certain sections, has recommended that your team select and discuss at least two to three cases (designated as "case studies") to illustrate how certain systems in your state actually function. The Project recommends either that your team identify one or two very knowledgeable individuals (who need not be team members) to select the illustrative cases or that your team jointly select the cases. When selecting the cases, please choose cases that are illustrative of the manner in which the system functions. Furthermore, although the Project has identified certain sections where case studies are recommended, your team should feel free to use or not use case studies, as it deems necessary. In contemplating case studies, your team should first determine one or two jurisdictions where courts are reasonably likely to provide the types of information desired. Then, in the manner described above, select at least one or two cases from these jurisdictions. Thereafter, if your team has the time and inclination, it may select additional case(s) within other jurisdiction(s).

Once your team begins explaining the information recommended in the Assessment Guide, drafting its answers to the questions, and completing the case studies, please send the Project updates, including drafts of your findings and information on your team's progress. The updates also should include any and all documents collected by your team that the Project could not obtain on the Internet. Documents that are attainable on the Internet do not need to be sent to the Project as long as they are properly cited, allowing the Project to obtain them via the Internet. See infra section I(F) for a discussion on the drafting process and anticipated timeline.

2.
Assessment Team's Objective for Part II: Race and Geography Study

The Project intends to work in conjunction with esteemed sociologists/criminologists to conduct race and geography studies in a number of states depending on various factors, including the existence of a recent, reliable race and geography study and the availability of the requisite data. The Project anticipates that some assistance by the Assessment Teams may be necessary to complete this portion of the assessment in a thorough yet timely manner.

F. Anticipated Timeline, Drafting Process, and the Structure of the Assessment Report

Your Assessment Team will have approximately six to ten months to collect the recommended data and draft your findings. The Project expects that as your team collects the information recommended in the Assessment Guide, it will explain the recommended documents, draft answers to the questions, and submit updates to the Project with your findings. These updates are imperative to the drafting of the assessment report, as the Project will draft the report at the same time your team collects the recommended information.

The Project envisions that each final assessment report will be comprised of two or three parts, depending on your state. The first section will provide an objective review of your state's death penalty system and will consist largely of a systematic recounting of the laws, rules, procedures, standards, guidelines, and case law that comprise the current system. If the Project and its consultants conduct a race and geography study in your state, the second section will discuss the results. Lastly, the third section will compare the data collected by your team with the recommendations contained in the Protocols and recent ABA resolutions. Following the initial drafting of the assessment report, the Project, in coordination with your team, will edit and finalize the report. It is anticipated that the report will be completed approximately two months after your team has completed gathering the recommended data. The Project anticipates that all of the assessments will be completed by the summer of 2006; however, your state's assessment may be completed significantly sooner than this. Upon completion of the assessments, the ABA will sponsor a national symposium, discussing death penalty issues, reviewing assessment findings, and highlighting trends found in the states.
https://www.j-sens-sens-syst.net/8/49/2019/
Journal of Sensors and Sensor Systems
J. Sens. Sens. Syst., 8, 49-56, 2019
https://doi.org/10.5194/jsss-8-49-2019
Special issue: Sensors and Measurement Systems 2018
Regular research article | 16 Jan 2019

# Novel radio-frequency-based gas sensor with integrated heater

Stefanie Walter, Andreas Bogner, Gunter Hagen, and Ralf Moos
Department of Functional Materials, University of Bayreuth, 95440 Bayreuth, Germany

Abstract. Up to now, sensor applications have rarely used materials whose dielectric properties are a function of the gas concentration. A sensor principle by which this material effect can be utilized is based on planar radio-frequency sensors. For the first time, such a sensor was equipped with an integrated heater and successfully operated at temperatures up to 700 °C. This makes it possible to apply materials that show gas-dependent changes in the dielectric properties only at higher temperatures. By coating the planar resonance structure with a zeolite, ammonia could be detected. The amount of ammonia stored in the sensitive layer can thereby be determined, since the resonant frequency of the sensor shifts with its ammonia loading. Desorption measurements showed that the storage behavior of ammonia in the gas-sensitive layer depends on the operating temperature of the sensor. When operated at 300 °C, the sensor therefore shows a purely gas-concentration-dependent signal. At lower operating temperatures, on the other hand, the sensor could possibly be used for dosimetric determination of very low ammonia concentrations.

1 Introduction

Sensors with different measuring principles are available for gas concentration measurement.
The evaluation of changes in electrical material properties due to the presence of the analyte to be detected is widely used. Most sensors currently determine these changes by measuring the resistance or the conductance of the sensitive material (Comini et al., 2009). For this purpose, a thin layer of the sensitive material can be applied on (interdigital) electrodes, as is usual, for example, with metal oxide sensors (Barsan et al., 2007). However, with this measuring method, the contact resistance between the electrodes and the sensitive material has a considerable effect on the measurement (Hoefer et al., 1995). In order to calculate this influence and to determine the resistance of the sensitive layer itself, a high-quality electrical connection between the electrodes and the sensitive material is necessary. Alternatively, four-wire planar interdigital electrodes can be used to eliminate the influence of the contact resistance (Hagen et al., 2013). An alternative measuring approach, which has been increasingly investigated in recent years, is based on the influence of sensitive materials on the propagation behavior of electromagnetic waves in the microwave frequency range around strip lines (Bailly et al., 2016a; Chahadih et al., 2015; Zarifi et al., 2015). Similar to resistive sensors, the sensitive materials can be applied in the form of layers on top of the strip lines. However, no electrical connection to the sensitive material is required (Zarifi et al., 2017). In addition, the evaluation of material parameters which have been rarely used for sensor applications so far (e.g., the dielectric properties, especially the permittivity) is now possible. This enables the use of a bunch of materials that have only minor resistive effects. For example, Rossignol et al. (2013) developed a resonance sensor with coplanar lines to detect ammonia and toluene at room temperature. 
For this purpose, a layer of cobalt phthalocyanine was deposited on a resonance structure (Rossignol et al., 2013). Since the sensor designs described in the literature often do not use high-temperature-resistant materials, this sensor principle has so far been used mainly for gas detection at room temperature (Bailly et al., 2016b; Rydosz et al., 2016). In many materials, however, permittivity changes only occur at higher temperatures. For example, ceria, as found in three-way catalysts, changes its oxidation state and thus also its dielectric properties depending on the oxygen partial pressure. This effect is only readily measurable at temperatures above 300 °C (Beulertz et al., 2015). Furthermore, for zeolites used in exhaust gas aftertreatment systems, an increasing permittivity change due to an ammonia concentration change was observed with increased temperatures (Dietrich et al., 2015). In a previous study, a planar microwave sensor with a microstrip ring as a resonance structure based on high-temperature-resistant materials was designed and investigated. By applying a zeolite layer over the resonance structure, the possibility of detecting ammonia at room temperature has already been demonstrated (Bogner et al., 2017). In order to enable the use of high-temperature active sensitive materials, the sensor is equipped with an integrated heating element in this study.

2 Principle of planar microwave sensors

## 2.1 Microstrip structure

For radio-frequency sensor applications, microstrip structures are particularly suitable due to their planar structure. This ensures simple and automatable production of the sensor devices, and the sensitive materials can be easily applied, for example by screen printing or aerosol deposition (Hanft et al., 2018; Ménil et al., 2005). Microstrip lines consist of a thin conductor on the surface of a carrier substrate, with the opposite side serving as a ground plane by metallization.
In contrast to other two-wire systems, such as coaxial cables, in microstrip lines the electromagnetic wave cannot propagate in an ideal transverse electromagnetic (TEM) mode. Caused by the line arrangement, the field propagation is not limited to the carrier substrate between signal and ground conductor, but is also apparent in the medium above the signal conductor. Due to the permittivity differences between this medium and the carrier substrate, there are field components parallel to the direction of propagation of the guided electromagnetic wave (this corresponds to the direction of the microstrip line). However, owing to the low strength of these components, the fields can still be described by quasi-TEM modes (Pozar, 2012). Based on this field distribution, the electromagnetic wave and its propagation behavior in the microstrip line are influenced not only by the material properties of the carrier substrate but also by the material above it (e.g., by a gas-sensitive layer).

## 2.2 Sensing principle

The propagation changes caused by the sensitive material above the microstrip line can be evaluated directly. However, in order to achieve high accuracy and sensitivity, it is advisable to excite a resonance structure and to conclude the dielectric properties of the sensitive material based on the changed resonance parameters (Chen, 2005). Planar microstrip resonators typically are designed in the form of ring or ribbon resonators. Excitation at certain frequencies (i.e., the resonant frequencies) by an external source causes a maximum of the energy input into the resonance structure, and a standing electromagnetic wave occurs in the resonance structure.
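The degree to which substrate and the medium above the line each influence the guided wave is commonly summarized in an effective permittivity. As an illustration, it can be estimated with Hammerstad's classical closed-form approximation, which is standard microstrip theory (cf. textbooks such as Pozar) rather than a formula given in this paper; the numerical values below are illustrative assumptions only.

```python
def microstrip_eps_eff(eps_r, w_over_h):
    """Hammerstad closed-form estimate of the effective permittivity of an
    open microstrip line (air above; valid for width/height w/h >= 1):
    eps_eff = (eps_r + 1)/2 + (eps_r - 1)/2 * (1 + 12*h/w)**-0.5
    """
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 / w_over_h) ** -0.5

# Alumina-like substrate (eps_r ~ 10) with a line as wide as the substrate
# is thick (w/h = 1): the field lives partly in air, partly in the ceramic,
# so eps_eff falls between 1 and eps_r.
print(round(microstrip_eps_eff(10.0, 1.0), 2))
```

A gas-sensitive layer on top of the line replaces part of the air region, so any permittivity change in that layer pulls the effective permittivity, and with it the guided wavelength, in the same direction.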
The resonant frequencies f_res depend not only on the dielectric properties of the surrounding materials, which are described by an effective permittivity ε_eff, but also on the characteristic length L_ch of the resonance structure (Chang and Hsieh, 2004):

f_res = n · c / (L_ch · √ε_eff).    (1)

For ring resonators, for example, the characteristic length is simply the circumference of the ring. At the resonant frequencies, the characteristic length of the resonance structure corresponds to an integer multiple n of the electromagnetic wavelength (Chang and Hsieh, 2004). With the sensors described here, only the fundamental resonant mode (n=1) with the lowest frequency is analyzed. Due to the miniaturization of the sensor, it lies at approximately 9 GHz. An additional evaluation of higher modes would require greater measurement effort. In contrast to the propagation behavior of the electromagnetic wave, the resonant frequency can easily be determined by measuring one of the scattering parameters of the resonance structure (e.g., the reflection coefficient S11) with a vector network analyzer (VNA). By applying a sensitive material above the resonance structure, the effective permittivity changes depending on the gas concentration present in this area, and a frequency shift occurs. Zeolites, for example, can be used as a sensitive layer to detect ammonia. It has already been successfully demonstrated that the ammonia stored in zeolite-based selective catalytic reduction (SCR) catalysts can be determined using a radio-frequency-based technology. The excitation and measurement of the resonances was carried out contactlessly by coaxial probe antennas in a cavity resonator formed by the metallic catalyst canning (Dietrich et al., 2017).
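Equation (1) can be evaluated directly. The sketch below uses the ring radius R = 2.07 mm given later in Sect. 3 and an assumed illustrative effective permittivity (the paper does not state its value); it reproduces the roughly 9 GHz fundamental resonance:

```python
import math

def ring_resonant_freq(radius_m, eps_eff, n=1):
    """Resonant frequency of a microstrip ring resonator via Eq. (1):
    f_res = n * c / (L_ch * sqrt(eps_eff)), with L_ch = 2*pi*R."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    l_ch = 2 * math.pi * radius_m
    return n * c / (l_ch * math.sqrt(eps_eff))

# R = 2.07 mm from the paper; eps_eff = 6.6 is an assumed value for the
# alumina substrate partially covered by air/layer above the line.
f1 = ring_resonant_freq(2.07e-3, eps_eff=6.6)
print(f"{f1 / 1e9:.2f} GHz")
```

A permittivity increase of the layer above the ring raises eps_eff and, per Eq. (1), lowers f_res, which is the frequency shift used as the sensor signal.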
This was possible since the ammonia storage in the zeolite structure causes a change in the complex permittivity ε̄ = ε′ − jε″ (Rauch et al., 2017). As a result, the propagation of the electromagnetic wave, and thus also the properties of the individual resonant modes (e.g., the resonant frequency), is directly influenced by the stored amount of ammonia. One challenge when using zeolites as sensor material is that ammonia can bind to the zeolite structure in two different ways (Di Iorio et al., 2015). Some of the ammonia is weakly bound, i.e., it can desorb when the zeolite is purged with inert gas. Furthermore, it is known that the amount of weakly stored ammonia depends on the gas phase concentration, which enables an ammonia concentration measurement by the radio-frequency sensor (Rauch et al., 2015). However, some of the ammonia is strongly bound, which means complete desorption can only be achieved by adding nitrogen oxides or by heating the sample (Kubinski and Visser, 2008; Rauch et al., 2017). Because of this, the sensor may also be applied as a gas dosimeter at very low ammonia concentrations. Since the ratio between the two binding types is temperature-dependent, the sensor could be used either for concentration measurement or as a dosimeter, depending on its operating temperature (Rodríguez-González et al., 2008).

# 3 Sensor design and methods

In order to enable temperature-dependent desorption of strongly bound ammonia, the radio-frequency sensor should be able to operate at temperatures of more than 600 °C. In contrast to other planar resonance sensors described in the literature, a ceramic substrate (Al2O3; CeramTec Rubalit 710; thickness: 0.635 mm; ε′ = 10.1 at 1 GHz; tan δ = 0.00022 at 1 MHz) with a purity of 99.6 % is used.
Unlike other carrier substrates, such as FR4, this material has a high temperature stability and a low thermal expansion coefficient. The latter is necessary to avoid an influence on the sensor behavior due to changes of the resonance geometry at different operating temperatures. Furthermore, a higher signal quality can be expected than with other carrier substrates due to the lower dielectric loss factor. The resonance structure of the sensor examined in this study is designed as a microstrip line and consists of a straight excitation line to which the ring resonance structure is electromagnetically coupled via a thin gap. To ensure a small sensor design, the radius of the ring structure R is only 2.07 mm. Thus, the first resonant frequency occurs at about 9 GHz. A general determination of the dependency of the radio-frequency signal on the sensor geometry parameters was already carried out in a previous work using the FEM software COMSOL Multiphysics (Bogner et al., 2017). Based on these investigations, a conductor width W of 0.22 mm and a coupling gap width d of 150 µm were selected for the ring resonator structure (Fig. 1). The excitation line is designed for a characteristic impedance of 50 Ω. In order to couple the electromagnetic waves into the excitation line, an SMA socket (End Launch SMA Connector, Southwest Microwave) for connecting a coaxial cable is located at one end. Figure 1The geometry of the resonance structure. The microstrip structure is laser-structured from a 400 nm gold thin film on the sensor front side (LPKF Microline 350L, LPKF Laser & Electronics AG). To ensure better adhesion of the thin film, a 4 nm thick chromium layer is applied as an adhesion promoter beforehand. On the back side of the device, the ground plane required for microstrip lines is also deposited as an Au thin film. In addition, a thick-film heater structure in a four-wire setup is installed there. 
The sensor can thereby be heated in the area of the resonance ring, and thus the sensitive layer can be kept at a constant temperature. To ensure the best possible heat transfer to the sensitive layer, the heating element is mounted using gold paste (DuPont 5744R). A disturbance of the sensor signal is prevented by the ground plane between the heating element and the resonance structure. The design of the sensor is shown schematically as an exploded view in Fig. 2. Furthermore, Fig. 3 shows the sensor in side and top view. Figure 2: Schematic design of the planar and heatable resonance sensor for ammonia detection. An iron zeolite is used for the sensitive layer. For this purpose, zeolite powder was obtained from an automotive PSA SCR catalyst, which has already been investigated with regard to its radio-frequency characteristics in Dietrich et al. (2017). The powder was then mixed with a 1:11 mixture of ethyl cellulose and terpineol to form a thick-film paste. This was applied as a layer with a thickness between 0.5 and 1.0 mm above the ring resonator and was fired at 600 °C (Fig. 3b). Figure 3: Side (a) and top view (b) of the sensor with the iron zeolite layer above the resonance structure. In order to determine the sensor's response to different gases, it was installed in a glass chamber that can be flushed with various gas mixtures. Mass flow controllers allow defined concentrations of NH3, O2 and NO to be added to the N2 base gas. The total volume flow during the measurements was kept constant at 0.5 L min⁻¹. The radio-frequency sensor was kept at a constant temperature by the integrated heating element. For this purpose, the dependency between the temperature of the sensitive layer and the heater resistance was determined beforehand with a pyrometer outside of the glass chamber. Due to the risk of possible overheating of the installed SMA socket and the PTFE used in it, the sensor was operated at a maximum temperature of 300 °C in the area around the sensitive layer.
Since a temperature-based removal of the strongly bound ammonia from the zeolite was therefore not possible, regeneration was done by purging the sensor with a nitrogen-oxide–oxygen mixture. As a result, the stored ammonia is converted to nitrogen and water by the standard SCR reaction (Koebel et al., 2000):

4 NH3 + 4 NO + O2 → 4 N2 + 6 H2O.    (2)

The frequency of the first resonant mode serves as the sensor signal. A vector network analyzer (Anritsu MS2820B) measures the magnitude of the reflection coefficient S11 in the range of 8 to 10 GHz approximately every 10 s. A Lorentz curve is fitted to these measured data. By means of the fit parameters, the resonant frequency, which corresponds to a minimum of the S11 coefficient, can be determined. However, even with an unloaded zeolite a temperature dependence can be observed. This is caused by a changed resonance geometry due to the thermal expansion of the sensor, as well as by the temperature-dependent permittivity of the sensor materials. In order to eliminate this cross-sensitivity of the sensor signal, only the relative frequency shift Δf/f_r,0, related to the resonant frequency of the unloaded sensor in a nitrogen atmosphere f_r,0, will be considered.

# 4 Results

Since Bogner et al. (2017) already showed that the sensor signal reacts sensitively to ammonia at room temperature but is also strongly influenced by the water content of the gas atmosphere, the sensor is now operated at temperatures of over 100 °C to reduce the water adsorption capability of the zeolite (Srinivasan et al., 2017). In the following, it is investigated how ammonia is stored in the gas-sensitive zeolite layer and how this influences the resonance behavior of the sensor.
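The evaluation chain described in Sect. 3 (sweep |S11| from 8 to 10 GHz, fit a Lorentz curve, read off the resonant frequency, form the relative shift Δf/f_r,0) can be sketched as follows. The dip parameters, the noise level and the reference frequency f_r0 are synthetic, assumed values, and the exact fit model used by the authors is not specified in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz_dip(f, f_res, gamma, depth, offset):
    """Lorentzian resonance dip in the |S11| magnitude (linear scale)."""
    return offset - depth / (1.0 + ((f - f_res) / gamma) ** 2)

# Synthetic 8-10 GHz sweep with an assumed dip at 9.05 GHz plus noise
rng = np.random.default_rng(0)
f = np.linspace(8.0, 10.0, 401)  # frequency axis in GHz
s11 = lorentz_dip(f, 9.05, 0.03, 0.8, 1.0) + 0.01 * rng.standard_normal(f.size)

# Least-squares fit; p0 gives rough starting values for the parameters
popt, _ = curve_fit(lorentz_dip, f, s11, p0=(9.0, 0.05, 0.5, 1.0))
f_res = popt[0]

# Relative shift w.r.t. an assumed unloaded reference resonance in N2
f_r0 = 9.10  # GHz
print(f"f_res = {f_res:.4f} GHz, relative shift = {(f_res - f_r0) / f_r0 * 100:.3f} %")
```

Fitting the whole dip rather than taking the raw minimum makes the extracted resonant frequency robust against measurement noise and the finite frequency-point spacing of the VNA sweep.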
## 4.1 Temperature dependence of the ratio between strongly and weakly bound ammonia

First, the contributions of strongly and weakly bound ammonia to the total sensor signal are to be determined as a function of the temperature of the sensitive layer. For this purpose, the sensor was heated to temperatures between 200 and 300 °C by the integrated heater. An ammonia concentration of 500 ppm in a nitrogen atmosphere was then applied for 10 min. The ammonia dosing was then switched off and the sensor chamber was purged with nitrogen for 20 min to enable the weakly bound ammonia to desorb. Finally, 500 ppm NO and 5 % O2 were added to remove the ammonia still stored in the sensor. Figure 4: Resonant frequency shift during adsorption as well as free and (by addition of NO) forced desorption of ammonia. Figure 4 shows the measured relative resonant frequency shift during ammonia loading and unloading. At all considered temperatures, the resonant frequency decreases after ammonia has been added. According to Eq. (1), this is caused by an increase in the permittivity of the sensitive layer. Thus, these measurements are consistent with the results of previous research (Dietrich et al., 2017; Rauch et al., 2017). An increase in sensor temperature results in a significantly lower resonant frequency shift. While at 200 °C a gas phase concentration of 500 ppm NH3 leads to a change in the signal of 0.3 %, this decreases to 0.1 % at 300 °C. The reason for this is the reduced ammonia storage capacity of zeolites at higher temperatures (Rauch et al., 2015). After switching off the ammonia dosing, the sensor signal immediately decreases due to the release of weakly bound ammonia. Owing to the temperature-dependent reaction kinetics, desorption takes place much faster at higher temperatures.
Depending on the requirements, the radio-frequency sensor could thus be tuned, via its operating temperature, either for a fast response to changes in ammonia concentration or for a high sensor signal. Due to the strongly bound ammonia remaining in the sensitive layer, at temperatures below 300 °C the resonant frequency does not return to its value before ammonia dosing started. After free desorption has been completed, a clear temperature dependency is still apparent. However, as soon as the nitrogen-oxide–oxygen mixture is added, the strongly bound fraction leaves the zeolite through the SCR reaction (Eq. 2) and the sensor signal returns to its original level. Figure 5: Frequency shift caused by strongly and weakly bound ammonia at different sensor temperatures. Just before the end of ammonia dosing and of the free desorption, the total sensor signal as well as the signal caused by strongly bound ammonia can be determined. From the difference between the two values, the frequency shift caused by the weakly bound ammonia can be calculated. These three values are shown in Fig. 5 as a function of the sensor temperature. They clearly show that a major part of the temperature dependence of the total signal is caused by the strongly bound ammonia. The weakly bound ammonia, on the other hand, is only slightly affected by the temperature. For instance, at 300 °C the frequency shift decays almost completely during free desorption; at 200 °C, however, the remaining signal is still higher than 50 % of the maximum value measured during NH3 adsorption. This means that the radio-frequency sensor can be operated at temperatures at which the influence of strongly bound ammonia is insignificant, so that it can probably be used as a gas sensor.

## 4.2 Geometry optimization for operation at higher temperatures

As already mentioned in Sect. 3, the sensor was only operated at temperatures up to 300 °C to prevent possible overheating of the SMA connector.
Although these temperatures were sufficient to return the frequency signal completely to its values before ammonia loading, an external addition of nitrogen oxides was necessary to ensure that the strongly bound ammonia was actually completely removed. Figure 6: Top view of the optimized sensor design. To avoid this in future work, significantly higher operating temperatures have to be made possible. In order to increase the thermal resistance between the sensitive layer and the radio-frequency connector, the carrier substrate geometry was adapted (Fig. 6). On the one hand, the width of the substrate in the area of the resonance structure was reduced to 6.35 mm. To enable the mounting of the SMA socket, the carrier substrate widens slightly toward the connector end of the sensor. On the other hand, the length of the sensor was extended to 60 mm. Hereby, the thermal resistance between the heated area of the sensor and the SMA socket could be increased considerably. Because of the higher conductor losses of the longer excitation line and the resulting weaker sensor signal, the geometry of the resonance structure was also adapted by reducing the coupling gap width d from 150 to 75 µm. To determine the maximum operating temperature of the new sensor design, the entire setup was thermally simulated with the FEM software COMSOL Multiphysics. The simulation model considered not only the heat conduction in the sensor and connector but also natural convection to the ambient air, thermal radiation and the electrical heating by the integrated heater. Forced convection by a flowing gas was not included. This, however, would increase heat dissipation and thus enable an even higher maximum operating temperature. Figure 7a shows the heat distribution at a sensor temperature of approx. 700 °C. Immediately behind the resonance structure, the temperature drops significantly. In stationary simulations, the connector temperature remains below 110 °C (Fig. 7b).
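The geometric reasoning behind the redesign can be illustrated with a simple 1-D conduction estimate, R_th = L/(k·A): halving the substrate width and lengthening the strip multiplies the conduction resistance between heater and connector. The old-design dimensions and the alumina thermal conductivity below are assumed illustrative values; the actual design relied on the full FEM model, which also includes convection and radiation:

```python
def thermal_resistance(length_m, width_m, thickness_m, k=30.0):
    """1-D conduction resistance R_th = L / (k * A) of a substrate strip
    between heated zone and connector (alumina k ~ 30 W/(m K), assumed)."""
    return length_m / (k * width_m * thickness_m)

# Redesign: width 6.35 mm (paper value); the 12.7 mm old width and the
# 30/50 mm heater-to-connector distances are assumed for illustration.
r_old = thermal_resistance(0.030, 12.7e-3, 0.635e-3)
r_new = thermal_resistance(0.050, 6.35e-3, 0.635e-3)
print(f"R_old = {r_old:.0f} K/W, R_new = {r_new:.0f} K/W, "
      f"ratio = {r_new / r_old:.2f}")
```

Under these assumptions the conduction resistance rises by a factor of about 3, which is the qualitative effect the narrower, longer substrate is designed to achieve.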
These simulations could be validated on the produced sensors using a pyrometer. Damage to the SMA socket therefore does not occur even after prolonged operation. Figure 7: (a) Simulated temperature distribution at a sensor temperature of approx. 700 °C. (b) Simulated temperature of the SMA connector TSMA depending on the temperature of the sensitive layer TSensor. Before the behavior of the redesigned radio-frequency sensor is investigated in more detail with regard to possible gas sensor characteristics, its functionality and reproducibility compared with the old sensor design are checked first. For this purpose, the same experiment as described above and shown in Fig. 4 was performed again in a temperature range from 100 to 300 °C (Fig. 8). However, the strongly bound ammonia was no longer desorbed by the external addition of NO but by heating the sensor up to 600 °C. Due to the adapted resonance geometry, an evaluation of the resonant frequencies was still possible without any problems. Figure 8: Frequency shift caused by strongly and weakly bound ammonia at different sensor temperatures, measured with the optimized sensor design. Overall, the results are very similar to those of the previous sensor design. The weakly bound fraction remains almost constant over the entire temperature range, while the strongly bound fraction decreases continuously and has nearly no influence on the total sensor signal at 300 °C. However, compared with the previous design a weaker sensor signal is obtained. One reason for this may be the thickness of the sensitive layer, which was only reproducible to a limited extent due to the manual deposition of the zeolite paste. This could be considerably improved in future developments using screen printing technology.
## 4.3 Gas sensor behavior at higher temperatures

In order to prove that the sensor signal also depends on the ammonia concentration in the gas phase, a further measurement was carried out in which the sensor was operated at a temperature of 310 °C. For this purpose, the sensor was exposed to an ammonia concentration of 100 ppm after a complete removal of the bound ammonia and a short purge with nitrogen. After a stationary sensor signal had been reached, the gas phase concentration was gradually increased in steps of 100 ppm up to a maximum of 500 ppm. The response time of the sensor, i.e., until almost no further change of the sensor signal occurred, was approx. 1 min. After this, the ammonia dosage was switched off and the resonant frequency again reached its value before the start of dosing. This shows that at this sensor temperature no measurable amount of ammonia is strongly bound in the sensitive layer. Changes in the sensor signal are exclusively due to the storage of weakly bound ammonia. Figure 9: Frequency shift caused by different ammonia concentrations at a sensor temperature of 310 °C. Figure 9 shows the resonant frequency at the end of each concentration step (i.e., after a steady state has been reached) versus the ammonia concentration in the nitrogen base gas. A linear characteristic of the sensor can be seen at the considered concentrations. Thus, the measuring setup can be used as a gas sensor at temperatures above 310 °C. By increasing the sensor temperature further, the storage and desorption rate of ammonia could be enhanced, as shown by the measurement in Fig. 4. This might considerably shorten the response time of the sensor.

# 5 Conclusion and outlook

Previous planar radio-frequency sensors were only designed for operation at room temperature. In this work, the application of high-temperature active materials as the gas-sensitive layer was made possible for the first time with this sensor principle.
For this purpose, a heating element was implemented in the sensor in such a way that the radio-frequency signal is not disturbed. Using FEM simulations, the geometry of the sensor substrate could be adapted in such a way that no damage to temperature-sensitive radio-frequency components occurs, even during continuous operation at 700 °C in the area of the resonance structure. An Fe zeolite served as the sensitive layer for the first gas measurements, enabling the sensor to be used for ammonia detection. Depending on the operating temperature of the sensor, ammonia is bound differently in the sensitive layer. The amount of strongly bound ammonia, which remains in the layer even with decreasing partial pressure, drops with increasing temperature. At temperatures above 300 °C, only weakly bound ammonia has an effect on the sensor signal, which makes it possible to use the radio-frequency sensor for gas concentration measurements. In further work, the radio-frequency sensor has to be characterized more precisely at operating temperatures that allow gas concentration measurement without the influence of strongly bound ammonia. Furthermore, the suitability of the sensor for dosimetric detection of very low ammonia concentrations will also be evaluated. In addition, the use of various other sensitive materials with high-temperature activity should be investigated. Possible applications include ceria for oxygen detection or materials that store nitrogen oxides, such as those used in NOx-storage catalysts.

Data availability. All relevant data presented in the article are stored according to institutional requirements and as such are not available online. However, all data used in this paper can be made available upon request to the authors.

Author contributions. All authors planned the experiments. AB performed the experiments using the original sensor design.
SW optimized the sensor and performed the experiments with the new sensor design. All authors evaluated the results and wrote the paper.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue “Sensors and Measurement Systems 2018”. It is a result of the “Sensoren und Messsysteme 2018, 19. ITG-/GMA-Fachtagung”, Nürnberg, Germany, from 26 June 2018 to 27 June 2018.

Acknowledgements. The authors gratefully thank Dr. Jaroslaw Kita for manufacturing the sensor substrates and laser-patterning the microstrip structures.

Edited by: Leonhard Reindl. Reviewed by: two anonymous referees.

References

Bailly, G., Harrabi, A., Rossignol, J., Stuerga, D., and Pribetich, P.: Microwave gas sensing with a microstrip interdigital capacitor: Detection of NH3 with TiO2 nanoparticles, Sens. Actuat. B, 236, 554–564, https://doi.org/10.1016/j.snb.2016.06.048, 2016a. Bailly, G., Rossignol, J., de Fonseca, B., Pribetich, P., and Stuerga, D.: Microwave Gas Sensing with Hematite: Shape Effect on Ammonia Detection Using Pseudocubic, Rhombohedral, and Spindlelike Particles, ACS Sens., 1, 656–662, https://doi.org/10.1021/acssensors.6b00297, 2016b. Barsan, N., Koziej, D., and Weimar, U.: Metal oxide-based gas sensor research: How to?, Sens. Actuat. B, 121, 18–35, https://doi.org/10.1016/j.snb.2006.09.047, 2007. Beulertz, G., Votsmeier, M., and Moos, R.: Effect of propene, propane, and methane on conversion and oxidation state of three-way catalysts: a microwave cavity perturbation study, Appl. Catal. B, 165, 369–377, https://doi.org/10.1016/j.apcatb.2014.09.068, 2015. Bogner, A., Steiner, C., Walter, S., Kita, J., Hagen, G., and Moos, R.: Planar Microstrip Ring Resonators for Microwave-Based Gas Sensing: Design Aspects and Initial Transducers for Humidity and Ammonia Sensing, Sensors, 17, 2422, https://doi.org/10.3390/s17102422, 2017.
Chahadih, A., Cresson, P. Y., Hamouda, Z., Gu, S., Mismer, C., and Lasri, T.: Microwave/microfluidic sensor fabricated on a flexible kapton substrate for complex permittivity characterization of liquids, Sens. Actuat. A, 229, 128–135, https://doi.org/10.1016/j.sna.2015.03.027, 2015. Chang, K. and Hsieh, L.-H.: Microwave ring circuits and related structures, 2nd ed., Wiley-Interscience, Hoboken, NJ, 5–55, 2004. Chen, L.: Microwave electronics: Measurement and materials characterization, Wiley, Chichester, 300–309, 2005. Comini, E., Faglia, G., and Sberveglieri, G. (Eds.): Solid State Gas Sensing, Springer Science & Business Media LLC, Boston, MA, 2009. Dietrich, M., Rauch, D., Simon, U., Porch, A., and Moos, R.: Ammonia storage studies on H-ZSM-5 zeolites by microwave cavity perturbation: Correlation of dielectric properties with ammonia storage, J. Sens. Sens. Syst., 4, 263–269, https://doi.org/10.5194/jsss-4-263-2015, 2015. Dietrich, M., Steiner, C., Hagen, G., and Moos, R.: Radio-Frequency-Based Urea Dosing Control for Diesel Engines with Ammonia SCR Catalysts, SAE Int. J. Engines, 10, 1638–1645, https://doi.org/10.4271/2017-01-0945, 2017. Di Iorio, J. R., Bates, S. A., Verma, A. A., Delgass, W. N., Ribeiro, F. H., Miller, J. T., and Gounder, R.: The Dynamic Nature of Brønsted Acid Sites in Cu–Zeolites During NOx Selective Catalytic Reduction: Quantification by Gas-Phase Ammonia Titration, Top. Catal., 58, 424–434, https://doi.org/10.1007/s11244-015-0387-8, 2015. Hagen, G., Kita, J., Izu, N., Röder-Roith, U., Schönauer-Kamin, D., and Moos, R.: Planar platform for temperature dependent four-wire impedance spectroscopy – A novel tool to characterize functional materials, Sens. Actuat. B, 187, 174–183, https://doi.org/10.1016/j.snb.2012.10.068, 2013. 
Hanft, D., Glosse, P., Denneler, S., Berthold, T., Oomen, M., Kauffmann-Weiss, S., Weis, F., Häßler, W., Holzapfel, B., and Moos, R.: The Aerosol Deposition Method: A Modified Aerosol Generation Unit to Improve Coating Quality, Materials, 11, 1572, https://doi.org/10.3390/ma11091572, 2018. Hoefer, U., Steiner, K., and Wagner, E.: Contact and sheet resistance of SnO2 thin films from transmission-line model measurements, Sens. Actuat. B, 26, 59–63, https://doi.org/10.1016/0925-4005(94)01557-X, 1995. Koebel, M., Elsener, M., and Kleemann, M.: Urea-SCR: a promising technique to reduce NOx emissions from automotive diesel engines, Catal. Today, 59, 335–345, https://doi.org/10.1016/S0920-5861(00)00299-6, 2000. Kubinski, D. and Visser, J.: Sensor and method for determining the ammonia loading of a zeolite SCR catalyst, Sens. Actuat. B, 130, 425–429, https://doi.org/10.1016/j.snb.2007.09.007, 2008. Ménil, F., Debéda, H., and Lucat, C.: Screen-printed thick-films: From materials to functional devices, J. Eur. Ceram. Soc., 25, 2105–2113, https://doi.org/10.1016/j.jeurceramsoc.2005.03.017, 2005. Pozar, D. M.: Microwave Engineering, 4th ed., Wiley, Hoboken, NJ, 147–153, 2012. Rauch, D., Dietrich, M., Simons, T., Simon, U., Porch, A., and Moos, R.: Microwave Cavity Perturbation Studies on H-form and Cu Ion-Exchanged SCR Catalyst Materials: Correlation of Ammonia Storage and Dielectric Properties, Top. Catal., 60, 243–249, https://doi.org/10.1007/s11244-016-0605-z, 2017. Rauch, D., Kubinski, D., Cavataio, G., Upadhyay, D., and Moos, R.: Ammonia Loading Detection of Zeolite SCR Catalysts using a Radio Frequency based Method, SAE Int. J. Engines, 8, 1126–1135, https://doi.org/10.4271/2015-01-0986, 2015. Rodríguez-González, L., Rodríguez-Castellón, E., Jiménez-López, A., and Simon, U.: Correlation of TPD and impedance measurements on the desorption of NH3 from zeolite H-ZSM-5, Solid State Ionics, 179, 1968–1973, https://doi.org/10.1016/j.ssi.2008.06.007, 2008. 
Rossignol, J., Barochi, G., de Fonseca, B., Brunet, J., Bouvet, M., Pauly, A., and Markey, L.: Microwave-based gas sensor with phthalocyanine film at room temperature, Sens. Actuat. B, 189, 213–216, https://doi.org/10.1016/j.snb.2013.03.092, 2013. Rydosz, A., Maciak, E., Wincza, K., and Gruszczynski, S.: Microwave-based sensors with phthalocyanine films for acetone, ethanol and methanol detection, Sens. Actuat. B, 237, 876–886, https://doi.org/10.1016/j.snb.2016.06.168, 2016. Srinivasan, A., Joshi, S., Tang, Y., Wang, D., Currier, N., and Yezerets, A.: Development of a Kinetic Model to Evaluate Water Storage on Commercial Cu-Zeolite SCR Catalysts during Cold Start, SAE Tech. Pap., 2017-01-0968, https://doi.org/10.4271/2017-01-0968, 2017. Zarifi, M. H., Thundat, T., and Daneshmand, M.: High resolution microwave microstrip resonator for sensing applications, Sens. Actuat. A, 233, 224–230, https://doi.org/10.1016/j.sna.2015.06.031, 2015. Zarifi, M. H., Shariaty, P., Hashisho, Z., and Daneshmand, M.: A non-contact microwave sensor for monitoring the interaction of zeolite 13X with CO2 and CH4 in gaseous streams, Sens. Actuat. B, 238, 1240–1247, https://doi.org/10.1016/j.snb.2016.09.047, 2017.
# Composition is not commutative

Contrary to the properties of sums and products, order matters in composition. In general,
$$f\circ g\ne g\circ f.$$
We say that composition is not commutative.

Setting $f(x) = x^2$ and $g(x) = x+1$, we see that, for $x\ne0$,
$$f\circ g(x) = (x+1)^2\ne x^2+1 = g\circ f(x).$$

However, for the inverse function, we have
$$f\circ f^{-1}(y) =y,\qquad f^{-1}\circ f(x) =x.$$
This comes from $y=f(x)\Longleftrightarrow x=f^{-1}(y)$.

The composition of monotonic functions is monotonic. If both functions have the same monotonicity (both increasing or both decreasing), the composite function is increasing; otherwise it is decreasing. To see this, assume that both $f$ and $g$ are decreasing. Then, for $x\le y$, we have $g(x)\ge g(y)$, hence $f\left(g\left(x\right)\right)\le f\left(g\left(y\right)\right)$. This means that $f\circ g$ is increasing.
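A quick numerical check with $f(x)=x^2$ and $g(x)=x+1$ makes the non-commutativity visible: the two compositions agree only at $x=0$.

```python
def f(x):
    return x ** 2

def g(x):
    return x + 1

# Evaluate f∘g and g∘f on a few sample points around zero
fg = [f(g(x)) for x in range(-2, 3)]   # (x+1)^2
gf = [g(f(x)) for x in range(-2, 3)]   # x^2 + 1
print(fg)  # [1, 0, 1, 4, 9]
print(gf)  # [5, 2, 1, 2, 5]
```

The lists match only in the middle entry, which corresponds to $x=0$, exactly as the inequality $(x+1)^2\ne x^2+1$ for $x\ne0$ predicts.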
# A bullet of mass 15 g leaves the barrel of a rifle with a speed of 800 m/s. If the length of the barrel is 75 cm, what is the average force that accelerates the bullet?

Jul 4, 2015

The bullet is accelerated by a force of 6400 N.

#### Explanation:

In order to determine the force that accelerates the bullet, you first need to determine the bullet's acceleration. You know the length of the gun barrel and the bullet's exit velocity, which means that you can write (with $v_0 = 0$)
$$v_\text{exit}^2 = v_0^2 + 2\,a\,l.$$

Solve the above equation for $a$ to get
$$a = \frac{v_\text{exit}^2}{2l} = \frac{(800\ \text{m/s})^2}{2 \cdot 0.75\ \text{m}} \approx 426{,}667\ \text{m/s}^2.$$

This means that the force accelerating the bullet is
$$F = m \cdot a = 0.015\ \text{kg} \cdot 426{,}667\ \text{m/s}^2 \approx 6400\ \text{N}.$$
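The arithmetic can be checked in a few lines:

```python
# Kinematics: v_exit^2 = v0^2 + 2*a*l with v0 = 0, then Newton's F = m*a
v_exit = 800.0   # exit speed, m/s
l = 0.75         # barrel length, m (75 cm)
m = 0.015        # bullet mass, kg (15 g)

a = v_exit ** 2 / (2 * l)
F = m * a
print(f"a = {a:.1f} m/s^2, F = {F:.1f} N")  # a = 426666.7 m/s^2, F = 6400.0 N
```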
https://wypozyczalnianamiotow.eu/fwav92r/linear-function-examples-0bfc04
A linear function is a function f that can be expressed in the form f(x) = mx + b, where m and b are constants and x is an arbitrary member of the domain. Often the relationship between two variables x and y is a linear function expressed as y = mx + b. Remember that "f(x)" is really just a fancy notation for what is usually the "y" variable; the only difference is the function notation. A linear function has a constant rate of change, and its graph is a straight line.

Example 1: Hannah's electricity company charges her $0.11 per kWh (kilowatt-hour) of electricity, plus a basic connection charge of $15.00 per month. Write a linear function that models her monthly electricity bill as a function of electricity usage.

Solution: If x is the number of kWh used in a month, the bill is f(x) = 0.11x + 15.

Example 2: Given f(x) = 3x − 1, solve for f(x) = 8. Setting 3x − 1 = 8 gives 3x = 9, so x = 3. This is one of the trickier problems in the function unit, but it is just applying what you know about equations and stating your answer in a much fancier format.

Example 3 (linear equations with money): It costs $8 to enter the carnival, and then each ride costs $2 to ride, so the total cost after x rides is f(x) = 2x + 8.

Linear functions also model cost, revenue and profit: C(x) = fixed cost + variable cost is a cost function, R(x) = selling price × (number of items sold) is a revenue function, and the profit function P(x) equals revenue less cost.

To graph a linear function, find two points which satisfy the equation y = mx + b, plot these points in the graph or x-y plane, and join them with a straight line. For example, to graph f(x) = −(5/3)x + 6 and label the x-intercept: f(0) = 6, so the y-intercept is (0, 6); since the slope is m = −5/3 (rise over run), starting from the y-intercept you mark a second point down 5 units and right 3 units. Using a table of values, you can verify that a function is linear by examining the values of x and y: the rate of change between entries must be constant.
If you studied the writing-equations unit, you learned how to write equations given two points, and given a slope and a point. We will use the slope formula to evaluate the slope:

$$m = \frac{y_{2}-y_{1}}{x_{2}-x_{1}}$$

In Excel, a linear forecast is computed with =FORECAST.LINEAR(x, known_y's, known_x's), where x (required) is the numeric x-value for which we want to forecast a new y-value, known_y's (required) is the dependent array or range of data, and known_x's (required) is the independent array or range of data that is known to us.

Nonlinear functions, by contrast, do not satisfy the linear equation y = mx + c.
A function which is not linear is called a nonlinear function; examples are exponential functions, parabolic (quadratic) functions such as y = ax², and inverse functions. The expression for each of these functions is different, and their graphs are not straight lines. In the linear case the independent variable is x, the dependent one is y, and b (the y-intercept) is the value of the dependent variable when x = 0.

The slope of a line is a number that describes its steepness and direction. Given two points A = (x₁, y₁) and B = (x₂, y₂), the slope is m = (y₂ − y₁)/(x₂ − x₁). If the slope is equal to 0, the line is parallel to the x-axis; if the variable x is a constant, x = c, the graph is a line parallel to the y-axis. For a linear function the slope is the constant rate of change of y with respect to x. For example, if F(2) = −4 and F(5) = −3, the points (2, −4) and (5, −3) give m = (−3 − (−4))/(5 − 2) = 1/3; joining the two points in the plane with a straight line gives the graph.

Example: find an equation of the linear function given f(2) = 5 and f(6) = 3. Rewriting the values as the ordered pairs (2, 5) and (6, 3), the slope is m = (3 − 5)/(6 − 2) = −1/2. Substituting (2, 5) into y = mx + b gives 5 = (−1/2)(2) + b, so b = 6 and f(x) = −(1/2)x + 6.

A linear cost function is called a bi-parametric function: once the two parameters "A" (the slope) and "B" (the intercept) are known, the complete function can be known. In coordinate geometry, the same linear cost function is the slope-intercept form of the equation of a straight line.

A linear equation is an algebraic equation of degree one, such as y = mx + b or 3x + 5y − 10 = 0. With one variable: 5x + 2 = 1; with two variables: 5x + 2y = 1. For example, 3x − 2 = 2x − 3 is a linear equation with solution x = −1: substituting gives L.H.S. = 3(−1) − 2 = −5 and R.H.S. = 2(−1) − 3 = −5, so L.H.S. = R.H.S. Linear equations can be added together, multiplied or divided.

Linear functions happen any time you have a constant rate of change. For example, the rate at which distance changes over time is called velocity, and pretty much any time you hear "_____ per _____" there is a linear equation involved, as long as that rate stays constant. Linear equations are also a useful tool for comparing rates of pay: if one company offers to pay you $450 per week and the other offers $10 per hour, and both ask you to work 40 hours per week, the second pays 10 × 40 = $400 per week, so the first company's offer is better.

There is a special linear function called the "identity function": f(x) = x. Its graph is a 45° line through the origin (its slope is 1); it is called "identity" because what comes out is identical to what goes in.

Each type of algebraic function is its own family with unique traits, and the linear parent function is the most basic of them.
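The two-point procedure described above — compute the slope, then substitute one point to get the intercept — can be sketched in Python (the function name is just illustrative):

```python
def linear_from_points(p1, p2):
    """Return slope m and intercept b of the line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope formula
    b = y1 - m * x1             # substitute one point into y = m*x + b
    return m, b

# Example from the text: f(2) = 5 and f(6) = 3
m, b = linear_from_points((2, 5), (6, 3))
print(m, b)  # -0.5 6.0  ->  f(x) = -(1/2)x + 6
```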
https://www.tutorialspoint.com/how-do-you-copy-an-element-from-one-list-to-another-in-java
# How do you copy an element from one list to another in Java?

Elements can be copied from one List to another easily using streams. Use a stream with `filter` to copy selective elements:

```java
List<Integer> copyOfList = list.stream().filter(i -> i % 2 == 0).collect(Collectors.toList());
```

## Example

Following is an example that copies only the even numbers from a list:

```java
package com.tutorialspoint;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CollectionsDemo {
   public static void main(String[] args) {
      List<Integer> list = Arrays.asList(11, 22, 3, 48, 57);
      System.out.println("Source: " + list);
      // Copy only the elements matching the predicate into a new list
      List<Integer> evenNumberList = list.stream()
            .filter(i -> i % 2 == 0)
            .collect(Collectors.toList());
      System.out.println("Even numbers in the list: " + evenNumberList);
   }
}
```

## Output

This will produce the following result:

```
Source: [11, 22, 3, 48, 57]
Even numbers in the list: [22, 48]
```
https://www.physicsforums.com/threads/photons-and-mometum.715829/
# Photons and momentum

1. Oct 11, 2013

### thatguy101

1. The problem statement, all variables and given/known data

In space near Earth, about 3.84×10^21 photons are incident per square meter per second. On average, the momentum of a photon is 1.3×10^−27 kg·m/s. Assume we have a 1205 kg spaceship and a square sail that is 26.3 m wide. How fast could the ship be travelling after 21 days?

2. Relevant equations

F·Δt = m·v

3. The attempt at a solution

I started out by finding the force the photons would exert on 1 m² and then multiplied that by 26.3² to get the force on the sail:

$(1.30\times 10^{-27})(3.84\times 10^{21})(26.3^2) \approx 0.00345\ \text{N}$

Since F = ma, I divided that by 1205 to get a ≈ 2.865×10^−6 m/s². I then converted 21 days to 1,814,400 s and multiplied 2.865×10^−6 by 1,814,400 to get 5.199 m/s. Did I go wrong somewhere? The computer says I'm wrong.

2. Oct 11, 2013

### TSny

Hello. Is any information given about whether or not the photons are reflected back by the sail?

3. Oct 11, 2013

### Andrew Mason

Your method is right. It may be a matter of significant figures. Try to work out the solution using symbols and plug in numbers at the end. It makes it easier to follow:

F_sail = dp_ship/dt = n·dp_ph/dt = σA·dp_ph/dt

where σ is the number of photons per unit area per unit time.
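The attempt above can be checked numerically in Python (assuming, as the attempt does, that the photon count is per square meter per second and that the photons are absorbed rather than reflected):

```python
photon_rate = 3.84e21   # photons per m^2 per s (assumed to be a rate)
p_photon = 1.3e-27      # average photon momentum, kg·m/s
width = 26.3            # side of the square sail, m
m_ship = 1205.0         # spaceship mass, kg
t = 21 * 24 * 3600      # 21 days in seconds

F = photon_rate * p_photon * width**2   # force on the sail, N
a = F / m_ship                          # acceleration, m/s^2
v = a * t                               # speed after 21 days, m/s

print(F)  # ≈ 3.45e-3 N
print(v)  # ≈ 5.20 m/s, matching the attempt
```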
http://clay6.com/qa/720/find-the-principal-value-of-the-following-
# Find the principal value of the following: $cos^{-1} \bigg( -\frac {1} {\sqrt 2} \bigg)$

$\begin{array}{1 1} \frac{3\pi}{4} \\ -\frac{3\pi}{4} \\ \frac{\pi}{4} \\ - \frac{\pi}{4} \end{array}$

Toolbox:

• The range of the principal value of $\cos^{-1}x$ is $\left [ 0,\pi \right ]$
• $\cos (\pi - x) = -\cos x$

Let $\cos^{-1}\left(-\frac{1}{\sqrt 2}\right) = x$, so $\cos x = \frac{-1}{\sqrt 2} = -\cos \frac{\pi}{4}$.

Because $\cos (\pi - x) = -\cos x$, we have $\cos x = \cos \left(\pi - \frac{\pi}{4}\right) = \cos \frac{3\pi}{4}$, and thus $x = \frac{3\pi}{4}$, which lies in the principal range $\left [ 0,\pi \right ]$.

Therefore, the principal value of $\cos^{-1} \left(\frac{-1}{\sqrt 2}\right)$ is $\frac{3\pi}{4}$.
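Since Python's `math.acos` returns values in the principal range [0, π], the result can be verified directly (a numeric sketch, not part of the original solution):

```python
import math

# acos returns the principal value in [0, pi]
x = math.acos(-1 / math.sqrt(2))
print(x, 3 * math.pi / 4)   # both ≈ 2.35619
assert math.isclose(x, 3 * math.pi / 4)
```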
https://www.scientificlib.com/en/Mathematics/LX/DarbouxVector.html
# Darboux vector

In differential geometry, especially the theory of space curves, the Darboux vector is the angular velocity vector of the Frenet frame of a space curve.[1] It is named after Gaston Darboux, who discovered it.[2] It is also called the angular momentum vector, because it is directly proportional to angular momentum. In terms of the Frenet-Serret apparatus, the Darboux vector ω can be expressed as[3]

$$\boldsymbol{\omega} = \tau \mathbf{T} + \kappa \mathbf{B} \qquad \qquad$$ (1)

and it has the following symmetrical properties:[2]

$$\boldsymbol{\omega} \times \mathbf{T} = \mathbf{T'},$$

$$\boldsymbol{\omega} \times \mathbf{N} = \mathbf{N'},$$

$$\boldsymbol{\omega} \times \mathbf{B} = \mathbf{B'},$$

which can be derived from Equation (1) by means of the Frenet-Serret theorem (or vice versa).

Let a rigid object move along a regular curve described parametrically by β(t). This object has its own intrinsic coordinate system. As the object moves along the curve, let its intrinsic coordinate system keep itself aligned with the curve's Frenet frame. As it does so, the object's motion will be described by two vectors: a translation vector, and a rotation vector ω, which is an areal velocity vector: the Darboux vector.

Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.

Consider the rigid object moving smoothly along the regular curve. Once the translation is "factored out", the object is seen to rotate the same way as its Frenet frame.
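The symmetrical properties ω × T = T′, ω × N = N′, ω × B = B′ can be sanity-checked numerically on a unit-speed circular helix r(t) = (a cos t, a sin t, bt) with a² + b² = 1, for which κ = a and τ = b (the helix and the parameter values below are just an illustrative test case):

```python
import numpy as np

a, b = 0.8, 0.6          # a^2 + b^2 = 1 -> unit-speed helix with kappa = a, tau = b
t = 0.7                  # arbitrary parameter value (here arc length s = t)

# Frenet frame of the unit-speed helix
T = np.array([-a * np.sin(t), a * np.cos(t), b])    # unit tangent
N = np.array([-np.cos(t), -np.sin(t), 0.0])         # principal normal
B = np.cross(T, N)                                   # binormal

kappa, tau = a, b
omega = tau * T + kappa * B      # Darboux vector, Equation (1)

# Frenet-Serret derivatives with respect to arc length
T_prime = kappa * N
N_prime = -kappa * T + tau * B
B_prime = -tau * N

assert np.allclose(np.cross(omega, T), T_prime)
assert np.allclose(np.cross(omega, N), N_prime)
assert np.allclose(np.cross(omega, B), B_prime)
print(omega)   # ≈ (0, 0, 1) for this helix: a constant-axis rotation
```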
The total rotation of the Frenet frame is the combination of the rotations of each of the three Frenet vectors:

$$\boldsymbol{\omega} = \boldsymbol{\omega}_\mathbf{T} + \boldsymbol{\omega}_\mathbf{N} + \boldsymbol{\omega}_\mathbf{B}.$$

Each Frenet vector moves about an "origin" which is the centre of the rigid object (pick some point within the object and call it its centre). The areal velocity of the tangent vector is:

$$\boldsymbol{\omega}_\mathbf{T} = \lim_{\Delta t \rightarrow 0} {\mathbf{T}(t) \times \mathbf{T}(t + \Delta t) \over 2 \, \Delta t} = {\mathbf{T}(t) \times \mathbf{T'}(t) \over 2}.$$

Likewise,

$$\boldsymbol{\omega}_\mathbf{N} = {1 \over 2} \ \mathbf{N}(t) \times \mathbf{N'}(t),$$

$$\boldsymbol{\omega}_\mathbf{B} = {1 \over 2} \ \mathbf{B}(t) \times \mathbf{B'}(t).$$

Now apply the Frenet-Serret theorem to find the areal velocity components:

$$\boldsymbol{\omega}_\mathbf{T} = {1\over 2} \mathbf{T} \times \mathbf{T'} = {1\over 2}\kappa \mathbf{T} \times \mathbf{N} = {1\over 2}\kappa \mathbf{B}$$

$$\boldsymbol{\omega}_\mathbf{N} = {1\over 2}\mathbf{N} \times \mathbf{N'} = {1\over 2}(-\kappa \mathbf{N} \times \mathbf{T} + \tau \mathbf{N} \times \mathbf{B}) = {1\over 2}(\kappa \mathbf{B} + \tau \mathbf{T})$$

$$\boldsymbol{\omega}_\mathbf{B} = {1\over 2}\mathbf{B} \times \mathbf{B'} = -{1\over 2}\tau \mathbf{B} \times \mathbf{N} = {1\over 2}\tau \mathbf{T}$$

so that

$$\boldsymbol{\omega} = {1\over 2}\kappa \mathbf{B} + {1\over 2}(\kappa \mathbf{B} + \tau \mathbf{T}) + {1\over 2}\tau \mathbf{T} = \kappa \mathbf{B} + \tau \mathbf{T},$$

as claimed.

The Darboux vector provides a concise way of interpreting curvature κ and torsion τ geometrically: curvature is the measure of the rotation of the Frenet frame about the binormal unit vector, whereas torsion is the measure of the rotation of the Frenet frame about the tangent unit vector.[2]

References

Stoker, J. J. (2011), Differential Geometry, Pure and applied mathematics 20, John Wiley & Sons, p.
62, ISBN 9781118165478. Farouki, Rida T. (2008), Pythagorean-Hodograph Curves: Algebra and Geometry Inseparable, Geometry and Computing 1, Springer, p. 181, ISBN 9783540733980. Oprea, John (2007), Differential Geometry and Its Applications, Mathematical Association of America Textbooks, MAA, p. 21, ISBN 9780883857489. Mathematics Encyclopedia
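The symmetry relations ω × T = T′, ω × N = N′, ω × B = B′ are easy to verify numerically for a concrete curve. This sketch uses a circular helix, for which κ = a/(a²+b²) and τ = b/(a²+b²) in closed form; all parameter values are arbitrary:

```python
import numpy as np

a, b = 2.0, 1.0              # helix radius and pitch (arbitrary)
c2 = a**2 + b**2
kappa, tau = a / c2, b / c2  # curvature and torsion of the helix

def frame(t):
    """Frenet frame of the helix r(t) = (a cos t, a sin t, b t)."""
    c = np.sqrt(c2)
    T = np.array([-a * np.sin(t), a * np.cos(t), b]) / c
    N = np.array([-np.cos(t), -np.sin(t), 0.0])
    B = np.cross(T, N)
    return T, N, B

t = 0.7
T, N, B = frame(t)
omega = tau * T + kappa * B   # Darboux vector, Eq. (1)

# Derivatives with respect to arc length s (ds = sqrt(a^2+b^2) dt),
# approximated by central finite differences.
h = 1e-6
ds = np.sqrt(c2) * 2 * h
dT = (frame(t + h)[0] - frame(t - h)[0]) / ds
dN = (frame(t + h)[1] - frame(t - h)[1]) / ds
dB = (frame(t + h)[2] - frame(t - h)[2]) / ds

assert np.allclose(np.cross(omega, T), dT, atol=1e-6)
assert np.allclose(np.cross(omega, N), dN, atol=1e-6)
assert np.allclose(np.cross(omega, B), dB, atol=1e-6)
```

The same check works for any regular curve once its Frenet frame is available; only the closed-form κ and τ are specific to the helix.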
https://svn.geocomp.uq.edu.au/escript/trunk/doc/user/diffusion.tex?sortby=rev&r1=107&r2=121&pathrev=2363
# Diff of /trunk/doc/user/diffusion.tex

revision 107 by jgs, Thu Jan 27 06:21:48 2005 UTC → revision 121 by jgs, Fri May 6 04:26:16 2005 UTC

@@ Line 1 @@
 % $Id$
-\chapter{How to Solve A Diffusion Problem}
+\section{The Diffusion Problem}
 \label{DIFFUSION CHAP}
@@ Line 8 @@
 \label{DIFFUSION FIG 1}
 \end{figure}
-\section{\label{DIFFUSION OUT SEC}Outline}
+\subsection{\label{DIFFUSION OUT SEC}Outline}
 In this chapter we will discuss how to solve the time dependent-temperature diffusion\index{diffusion equation} for
 a block of material. Within the block there is a heat source which drives the temperature diffusion.
 On the surface, energy can radiate into the surrounding environment.
@@ Line 22 @@
 In Section~\ref{DIFFUSION TRANS SEC} the solver of the Helmholtz equation is used to build a
 solver for the temperature diffusion problem.
-\section{\label{DIFFUSION TEMP SEC}Temperature Diffusion}
+\subsection{\label{DIFFUSION TEMP SEC}Temperature Diffusion}
 The unknown temperature $T$ is a function of its location in the domain and time $t>0$. The governing equation
 in the interior of the domain is given by
@@ Line 100 @@
 As a first step to implement a solver for the temperature diffusion problem we will
 first implement a solver for the boundary value problem that has to be solved at each time step.
-\section{\label{DIFFUSION HELM SEC}Helmholtz Problem}
+\subsection{\label{DIFFUSION HELM SEC}Helmholtz Problem}
 The partial differential equation to be solved for $T^{(n)}$ has the form
 \omega u  - (\kappa u\hackscore{,i})\hackscore{,i} = f
@@ Line 177 @@
    def setValue(self,kappa=0,omega=1,f=0,eta=0,g=0)
         ndim=self.getDim()
         kronecker=numarray.identity(ndim)
-        self._setValue(A=kappa*kronecker,D=omega,Y=f,d=eta,y=g)
+        self._LinearPDE__setValue(A=kappa*kronecker,D=omega,Y=f,d=eta,y=g)
 \end{python}
 \code{class Helmholtz(linearPDE)} declares the new \class{Helmholtz} class as a subclass
 of \LinearPDE which we have imported in the first line of the script.
@@ Line 185 @@
 \method{setValue} method of the \LinearPDE class. The new method which has the
 parameters of the Helmholtz \eqn{DIFFUSION HELM EQ 1} as arguments
 maps the parameters of the coefficients of the general PDE defined
-in \eqn{EQU.FEM.1}. We are actually using the \method{_setValue} of
-the \LinearPDE class (notice the leeding underscoure).
+in \eqn{EQU.FEM.1}. We are actually using the \method{_LinearPDE__setValue} of
+the \LinearPDE class.
 The coefficient \var{A} involves the Kronecker symbol. We use the
 \numarray function \function{identity} which returns a square matrix with ones on the
 main diagonal and zeros off the main diagonal. The argument of \function{identity} gives the order of the matrix.
@@ Line 249 @@
 when a given tolerance has been reached. The default tolerance is $10^{-8}$. This value can be altered by using the
 \method{setTolerance} of the \LinearPDE class.
-\section{The Transition Problem}
+\subsection{The Transition Problem}
 \label{DIFFUSION TRANS SEC}
 Now we are ready to solve the original time dependent problem. The main
 part of the script is the loop over time $t$ which takes the following form:
@@ Line 359 @@
 \end{verbatim}
 where \file{diffusion.net} is an \OpenDX script available in the \ExampleDirectory.
 Use the \texttt{Sequencer} to move forward and and backwards in time.
 \fig{DIFFUSION FIG 2} shows the result for some selected time steps.
http://darribas.org/quant_geog/
# Dictionary

Source: Oxford English Dictionary

### Quantitative

- “That is, or may be, measured or assessed with respect to or on the basis of quantity; that may be expressed in terms of quantity; quantifiable.”

### Geography

- “The field of study concerned with the physical features of the earth and its atmosphere, and with human activity as it affects and is affected by these, including the distribution of populations and resources and political and economic activities; also as a subject of educational study or examination.”

### Fotheringham, Brunsdon, and Charlton (2000)

“One or more of the following activities:

- the analysis of numerical spatial data;
- the development of spatial theory;
- and the construction and testing of mathematical models of spatial processes"

### Murray (2010)

“The collection of methods that are applied, or could/can be applied, by geographers and others to study spatial phenomena, issues and problems”

# History

- As practice, origin is very old and hard to date
- As a movement, 1950s/60s/70s $$\rightarrow$$ Quantitative Revolution
  - Adoption of the scientific method in human geography
  - Focus on quantification and measurement
  - Strong association with particular methods: statistics, modeling…
  - Sprung out of a few epicenters (UW’s “space cadets”, Lund’s T. Hagerstrand, also related to Isard’s Regional Science)
- 1980s/90s $$\rightarrow$$ Cultural turn in Human Geography
- 1990s/00s $$\rightarrow$$ Spill over other disciplines (Economics, sociology, public health/policy…)
- [My view] Nowadays $$\rightarrow$$ Back in fashion? Big Data revolution, IoT, computational power…

# Murray (2010)

(Spatial) methods that can be/have been applied to human and physical geography problems and issues. Broad categories:

- Geographic Information Systems (GISs)
- Airborne sensing
- Statistics and exploratory spatial data analysis (ESDA)
- Mathematics and optimization
- Regional analysis
- Computer science and simulation

# Geographic Information Systems (GISs)

“Collection of hardware, software, and associated procedures to support spatial data

- acquisition,
- management,
- manipulation,
- analysis,
- and display"

Let’s walk through each of them with an example…
https://tex.stackexchange.com/questions/228208/generating-a-3x3-payoff-matrix-game-theory
# Generating a 3x3 payoff matrix (Game-Theory)

Been working on some economics lately, and was able to figure out how to put together some 2x2 payoff matrices, but I am not sure how to go about doing a 3x3. I would like to keep the same general idea if possible, i.e. the TikZ picture. Here is what I have used to generate a 2x2:

%third matrix
%learn how to do 3x3
\begin{center}
\begin{tikzpicture}
\matrix[matrix of math nodes,
        every odd row/.style={align=right},
        every even row/.style={align=left},
        every node/.style={text width=1.5cm},
        row sep=0.2cm,column sep=0.2cm] (m) {
2&\bf{3}\\
4&\bf{2}\\
-1&\bf{0}\\
\bf{6}&0\\
};
\draw (m.north east) rectangle (m.south west);
\draw (m.north) -- (m.south);
\draw (m.east) -- (m.west);
\coordinate (a) at ($(m.north west)!0.25!(m.north east)$);
\coordinate (b) at ($(m.north west)!0.75!(m.north east)$);
\node[above=5pt of a,anchor=base] {Left};
\node[above=5pt of b,anchor=base] {Right};
\coordinate (c) at ($(m.north west)!0.25!(m.south west)$);
\coordinate (d) at ($(m.north west)!0.75!(m.south west)$);
\node[left=2pt of c,text width=1cm] {Up};
\node[left=2pt of d,text width=1cm] {Down};
\node[above=18pt of m.north] (firm b) {Column};
\node[left=1.6cm of m.west,align=center,anchor=center] {Row};
%\node[above=5pt of firm b] {Payoff Matrix};
\end{tikzpicture}
\end{center}

• The question is....? – user11232 Feb 15 '15 at 7:56
• @HarishKumar: I think the OP wants to know how to modify his code for a 3 by 3 matrix. – Shahab Feb 15 '15 at 9:33
• @Shahab And how does a 3 x 3 matrix look like? I don't know game theory! – user11232 Feb 15 '15 at 9:35
• @HarishKumar: Basically 9 boxes instead of 4. – Shahab Feb 15 '15 at 9:36
• @Shahab hehe. I think I know that much ;-) – user11232 Feb 15 '15 at 9:44

Please have a look at Will two-letter font style commands (\bf, \it, …) ever be resurrected in LaTeX?
\documentclass[tikz]{standalone}
\usetikzlibrary{matrix,calc,positioning}
\begin{document}
\begin{tikzpicture}
\matrix[matrix of math nodes,draw,
        every odd row/.style={align=right},
        every even row/.style={align=left},
        nodes={text width=1.5cm},row sep=0.2cm,column sep=0.2cm] (m)
{2&3&6\\4&2&-1\\-1&0&0\\0&0&0\\2&3&6\\4&2&-1\\};
\foreach\x[count=\xi from 2,evaluate={\xx=int(2*\x);\xxi=int(\xx+1)}] in {1,2}{
  \draw ({$(m-1-\x)!0.5!(m-1-\xi)$}|-m.north) -- ({$(m-1-\x)!0.5!(m-1-\xi)$}|-m.south);
  \draw ({$(m-\xx-1)!0.5!(m-\xxi-1)$}-|m.west) -- ({$(m-\xx-1)!0.5!(m-\xxi-1)$}-|m.east);
}
\foreach\x in{0,1,2}{
  \node[text depth=0.25ex,above=2mm] at ($(m.north west)!{(2*\x+1)/6}!(m.north east)$) {\x};
  \node[left=2mm] at ($(m.north west)!{(2*\x+1)/6}!(m.south west)$) {\x};
}
\end{tikzpicture}
\end{document}

• Don't you need .append style instead of .style to make keep the node entries math. The OP had some entries \bf, that ought to be \mathbf. – Andrew Swann Feb 15 '15 at 12:41
• @AndrewSwann Oops. I didn't even pay attention. Fixed now via nodes. Thanks. I'll leave the node ornaments to OP though. – percusse Feb 15 '15 at 12:49
https://math.stackexchange.com/questions/334382/need-to-prove-the-sequence-a-n-1-frac1nn-converges-by-proving-it-is-a
Need to prove the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving it is a Cauchy sequence I am trying to prove that the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving that it is a Cauchy sequence. I don't get very far, see: for $\epsilon>0$ there must exist $N$ such that $|a_m-a_n|<\epsilon$, for $m,n>N$ $$|a_m-a_n|=\bigg|\bigg(1+\frac{1}{m}\bigg)^m-\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq \bigg|\bigg(1+\frac{1}{m}\bigg)^m\bigg|+\bigg|\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq\bigg(1+\frac{1}{m}\bigg)^m+\bigg(1+\frac{1}{n}\bigg)^n\leq \quad?$$ I know I am supposed to keep going, but I just can't figure out the next step. Can anyone offer me a hint please? Or if there is another question that has been answered (I couldn't find any) I would gladly look at it. Thanks so much! • math.stackexchange.com/questions/64860/… – user940 Mar 19 '13 at 2:10 • @ByronSchmuland I looked at the link, but can you explain what the AM-GM inequality is, and what $\prod_{i=0}^n \, x_i .$ mean? Thanks! – user66807 Mar 19 '13 at 2:30 • @ Sebastian Griotberg (I don't know if it matters or not, but this isn't a homework question. 
It's for preparation for a test) – user66807 Mar 19 '13 at 3:27

We have the following inequalities:

$$\left(1+\dfrac{1}{n}\right)^n = 1 + 1 + \dfrac{1}{2!}\left(1-\dfrac{1}{n}\right)+\dfrac{1}{3!}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots \leq 2 + \dfrac{1}{2} + \dfrac{1}{2^2} + \dots = 3.$$

Similarly,

\begin{align*} \left(1-\dfrac{1}{n^2}\right)^n &= 1 - {n \choose 1}\frac{1}{n^2} + {n \choose 2}\frac{1}{n^4} - \dots\\ &= 1 - \frac{1}{n} + \dfrac{1}{2!n^2}\left(1-\dfrac{1}{n}\right) - \dfrac{1}{3!n^3}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots \end{align*}

So,

$$\left| \left(1-\dfrac{1}{n^2}\right)^{n} -\left(1-\frac{1}{n} \right)\right| \leq \dfrac{1}{2n^2} + \dfrac{1}{2^2n^2} + \dfrac{1}{2^3n^2} + \ldots = \dfrac{1}{n^2}.$$

Now,

\begin{align*} \left(1+\frac{1}{n+1}\right)^{n+1} - \left(1+\frac{1}{n}\right)^n &= \left(1+\frac{1}{n}-\frac{1}{n(n+1)}\right)^{n+1}-\left(1+\frac{1}{n}\right)^n \\ &=\left(1+\frac{1}{n}\right)^{n+1}\left\{ \left( 1- \frac{1}{(n+1)^2}\right)^{n+1} - \frac{n}{n+1}\right\}\\ &= \left(1+\frac{1}{n}\right)^{n+1}\left(1 - \frac{1}{n+1} + \text{O}\!\left(\frac{1}{n^2}\right) - \frac{n}{n+1}\right)\\ & = \left(1+\frac{1}{n}\right)^{n+1}\,\text{O}\!\left(\frac{1}{n^2}\right) \\ &= \text{O}\!\left(\frac{1}{n^2}\right) \quad \text{(since } (1+1/n)^{n+1} \text{ is bounded)}. \end{align*}

So, letting $a_k = (1+1/k)^k$ we have $|a_{k+1}-a_k| \leq C/k^2$ for some $C$, and hence $\sum_{ k \geq n } | a_{k+1} - a_k | \to 0$ as $n \to \infty$. Since $|a_n - a_m| \leq \sum_{ k \geq \min\{m,n\}} |a_{k+1} - a_k|$, given $\epsilon > 0$ choose $N$ such that $\sum_{ k \geq N} |a_k - a_{k+1}| < \epsilon$; then $|a_n - a_m| < \epsilon$ for $n,m \geq N$.

Look if it works! For $m>n \geq 1$, $|a_m-a_n| \leq |a_m^{1/m}-a_n^{1/n}|=\left|\frac {1}{m}-\frac {1}{n}\right|<\frac {1}{m}+\frac {1}{n}<\frac {2}{n}< \epsilon$ if $n >[2 /\epsilon]$.

• That idea came to me, but I didn't think of using it at the beginning. Thanks so much, very helpful. – user66807 Mar 19 '13 at 3:47
• @learner, how do you prove the very first inequality from the left? – DonAntonio Mar 19 '13 at 3:51
• @DonAntonio That's a good question that I hadn't thought of. I don't know how to prove it, but plugging in values where $m>n\geq1$ works, but proving it would be helpful as well. – user66807 Mar 19 '13 at 4:12
• Try plugging in $n=5$ and $m=7$. – user940 Mar 19 '13 at 15:34
• @ByronSchmuland You are right. When plugging in those values the inequality doesn't hold...unless of course I didn't calculate properly. – user66807 Mar 29 '13 at 21:29
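The $O(1/n^2)$ bound on consecutive differences is easy to probe numerically. In the sketch below, C = 3 is an empirical constant chosen for the check, not a proved bound:

```python
import math

# a_n = (1 + 1/n)^n, the sequence from the question
a = lambda n: (1 + 1 / n) ** n

# consecutive differences should decay like O(1/n^2);
# C = 3 is an empirical constant for this check, not a proved bound
C = 3.0
for n in range(1, 2000):
    assert abs(a(n + 1) - a(n)) <= C / n**2

# the sequence increases toward e
assert a(1000) < a(10000) < math.e
```

Summing the tail of C/k² then gives the Cauchy estimate used at the end of the first answer.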
https://www.physicsforums.com/threads/question-about-parity.282954/
# Question about parity

1. Jan 5, 2009

### Bill Foster

I'm reading a book: B.R. Martin's Nuclear and Particle Physics - An Introduction. In section 1.3.1 on Parity, it states the following:

When dealing with p·x in the exponent, which should p and x be treated as - vectors or operators?

Suppose I work it out. If $\exp[i(\vec p \cdot \vec x - Et)]$ is an eigenfunction of momentum, then I assume that means that:

$$\hat{p}\,\psi(\vec x,t) = -i\hbar\frac{\partial}{\partial x}\exp[i(\vec p \cdot \vec x - Et)]$$

Now if p and x are treated as vectors, then I will have to rewrite $\vec p \cdot \vec x$ as $px\cos{\theta}$, which would lead to $\hbar p \cos{\theta}\,\exp[i(\vec p \cdot \vec x - Et)]$, right?

Then the author goes on to say

I don't really understand that. Because if I did the above math correctly, then if p = 0, then $\hbar p \cos{\theta}\,\exp[i(\vec p \cdot \vec x - Et)]$ would also be 0.

Can someone clear this up for me? Thanks.

Last edited: Jan 5, 2009

2. Jan 5, 2009

### CompuChip

Sorry, don't know that one.

They should be vectors, because you are taking the inner product.

Actually that doesn't say much. Basically you just wrote out what $\hat p$ looks like in "real"-space. Being an eigenfunction means that $$\hat p \psi(x, t) = p \psi(x, t)$$ where the p on the right hand side is just a number. If you want to write it in real space, then you get $$-i \hbar \frac{\partial}{\partial x} e^{i (\vec p \cdot \vec x - E t)} = P e^{i (\vec p \cdot \vec x - E t)}$$ I have called the eigenvalue P here, because from a mathematical point of view, it is just a number which needn't have anything to do with the momentum vector $\vec p$. In linear algebra, you would conventionally use $\lambda$ instead of P.

That is a possibility, where you go to polar coordinates in which you express a vector $\vec p$ in terms of a length $p$ and two angles $\theta, \phi$ and you choose your axes such that one of the variables $\theta$ is precisely the angle between $\vec p$ and $\vec x$ (that is, you make $\vec p$ lie along the z-axis).

You did the math correctly. Note that if $\vec p = \vec 0$, then in your polar coordinates in which p is the radial coordinate, also $p = 0$. So even after multiplying by $\hbar \cos\theta\, e^{\cdots}$ it will be zero.

That entire exercise was just to show that indeed the exponential is an eigenfunction of the momentum operator, and you can read off the eigenvalue (it's the lambda in $\hat p \psi = \lambda(p) \psi$).

I think the point the author is trying to make is that, if the momentum vector is zero, there is no dependence on $\vec x$ anymore. So if you flip all the spatial coordinates $\vec x \to -\vec x$ (meaning x to -x, y to -y, z to -z, t to t) then the eigenfunction won't change.

Hope that lifts some confusion?
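The eigenfunction claim in the thread can be checked numerically in one dimension, with a finite-difference derivative standing in for $\hat p$. Units are chosen so that ħ = 1, and the momentum value is arbitrary:

```python
import numpy as np

hbar = 1.0                      # natural units for this check
p = 2.5                         # momentum eigenvalue (arbitrary)

def psi(x, t=0.0, E=1.0):
    """Plane wave exp[i(p*x - E*t)] in one dimension."""
    return np.exp(1j * (p * x - E * t))

x = np.linspace(0, 10, 100001)
dx = x[1] - x[0]

# apply p_hat = -i*hbar*d/dx via central finite differences
p_hat_psi = -1j * hbar * np.gradient(psi(x), dx)

# away from the endpoints, p_hat psi = hbar*p * psi (the exponent here
# carries no 1/hbar, matching the convention used in the thread)
ratio = p_hat_psi[1:-1] / psi(x[1:-1])
assert np.allclose(ratio, hbar * p, atol=1e-4)
```

Setting `p = 0` makes `psi` constant in `x`, so the finite-difference derivative vanishes: the eigenvalue is 0, as discussed above.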
https://www.allaboutcircuits.com/tools/rf-power-ratio-conversion-calculator/
# RF Power Ratio Conversion Calculator

## This tool is designed to convert power ratios to decibels (dB)

### Overview

Power ratios are presented in decibels to avoid handling very small or very large values. This calculator will help you convert power ratios to decibels. Take note that you can choose between milliwatts or watts for both the input and output power.

### Equation

$$Ratio_{dB} = 10 \log\left(\frac{P_{out}}{P_{in}}\right)$$

Where:

$$P_{out}$$ = output power in watts or milliwatts

$$P_{in}$$ = input power in watts or milliwatts

### Applications

Power ratios are common in electronics engineering, especially when handling amplifiers and other active devices. An amplifier's gain is the ratio of the power at its output to the power at its input. Naturally, in an amplifier the output power is higher than the input power, so the ratio is greater than one. In decibels, this is a positive number. If the ratio is less than one, it is considered a loss. A loss in decibels is a negative value. Losses are generally undesirable in RF technology, but there are instances where a loss is deliberately inserted in the circuit to "attenuate" power.

Decibels are handy for presenting very small or very large ratios. For example, the ratio 100000000/100 would be presented as 1000000. In decibels, this is just 60 dB. Another example is the ratio 0.0001, which is equal to -40 dB.

If there are two or more amplifiers connected in cascade, you can compute their overall gain. If the gains are given as ratios, the overall gain is calculated by multiplying the individual gains of the amplifiers. If the gains are given in decibels, the overall gain is the sum of the individual gains. The same holds for calculating the total losses in a circuit. This makes calculations easier and is one of the reasons why ratios are often presented in decibels in electronics engineering.
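The conversion formula above is a one-liner in code; this sketch reproduces the two example ratios and the "gains add in dB" property for cascaded stages:

```python
import math

def power_ratio_db(p_out, p_in):
    """Convert a power ratio P_out/P_in to decibels."""
    return 10 * math.log10(p_out / p_in)

print(power_ratio_db(100_000_000, 100))  # gain: 60.0 dB
print(power_ratio_db(1, 10_000))         # loss: -40.0 dB

# cascaded stages: gains multiply as ratios, add in dB
g1, g2 = 20.0, 50.0
assert math.isclose(power_ratio_db(g1 * g2, 1),
                    power_ratio_db(g1, 1) + power_ratio_db(g2, 1))
```

Because both powers enter only through their ratio, the same function works whether both are in watts or both in milliwatts, as the calculator notes.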
https://brilliant.org/problems/the-rolling-starts-now/
# The rolling starts now...

A uniform wheel (disc) of radius $$R$$ is set rotating about its axis at an angular speed $$\omega$$. It is then carefully placed on a rough horizontal surface of friction coefficient $$\mu$$ with its axis horizontal. Because of friction the wheel accelerates forward and its rotation decelerates until the wheel starts pure rolling. If the time after which pure rolling starts on the surface is given by $$\frac { a }{ b } \frac { R\omega }{ \mu g }$$, then find $$a+b$$.
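One way to sanity-check the algebra behind a problem like this is to integrate the slipping phase numerically. For a uniform disc, friction gives a linear acceleration μg and an angular deceleration 2μg/R (from I = ½mR²), and slipping ends when v = ωR. The parameter values below are arbitrary:

```python
# arbitrary parameters for the check
R, omega0, mu, g = 0.3, 50.0, 0.4, 9.81

# during slipping: v grows at mu*g, spin decays at alpha = 2*mu*g/R
# (uniform disc: I = m*R**2/2, friction torque = mu*m*g*R)
dt = 1e-5
v, w, t = 0.0, omega0, 0.0
while v < w * R:            # slip until the contact point is at rest
    v += mu * g * dt
    w -= 2 * mu * g / R * dt
    t += dt

print(t)   # the slipping phase ends at a finite time of the stated form
```

Comparing `t` against the closed form for several parameter sets is a quick way to confirm the value of a/b before submitting.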
https://ptrrupprecht.wordpress.com/2017/05/09/whole-cell-patch-clamp-part-3-limitations-of-quantitative-whole-cell-voltage-clamp/comment-page-1/
## Whole-cell patch clamp, part 3: Limitations of quantitative whole-cell voltage clamp

Before I first dived into experimental neuroscience, I imagined whole-cell voltage clamp recordings to be the holy grail of precision. Directly listening to the currents that take place inside of a living neuron! How beautiful and precise, compared to poor-resolution techniques like fMRI or even calcium imaging! I somehow thought that activation curves and kinetics as recorded by Hodgkin and Huxley could be easily measured using voltage clamp, without introducing any errors. Coming partially from a modeling background, I was especially attracted by the prospect of measuring both inhibitory and excitatory inputs (IPSCs and EPSCs) that would allow me to afterwards combine them in a conductance-based model of a single neuron (or even a network of such neurons). Here I will write about the reasons why I changed my mind about the usefulness of such modeling efforts, with a focus on whole-cell recordings of the small (5-8 μm diameter) zebrafish neurons that I have been recording from during the last year.

Let's have a look at the typical components (=variables) of the conductance-based neuron model.

$C_m dV(t)/dt = g_0 (V_0 - V(t)) + g_I(t) (V_I - V(t)) + g_E(t) (V_E - V(t))$

Measured quantities: membrane capacitance $C_m$, resting conductance (inverse of resting membrane resistance $R_m$) $g_0$, reversal potentials for inhibitory and excitatory conductances, $V_I$ and $V_E$. From the currents measured with the voltage clamped to $V_I$ and $V_E$, respectively, the conductance changes over time $g_E(t)$ and $g_I(t)$ can be inferred. Altogether, this results in the trajectory for $V(t)$, the time course of the membrane potential. Then, the spiking threshold $V_{thresh}$ would allow one to see from the simulated membrane potential when action potentials occur.

Unfortunately, a simple order-of-magnitude estimate of the parameters is not good enough to make an informative model.
In the following, I will therefore try to understand: when measuring these variables, how precise are these measurements, and why? First of all, it took me a long time to understand that there is a big difference between the famous voltage-clamp experiments performed by Hodgkin and Huxley and those done in the whole-cell configuration. H&H inserted the wire of the recording electrode into the giant axon (picture to the left, taken from Hodgkin, Huxley and Katz, 1952). In this configuration, there is basically no resistance between electrode and cytoplasm, because the electrode is inside of the cell. In whole-cell recordings, however, the electrode wire is somewhere inside of the glass pipette (picture on the right side). The glass pipette is connected to the cell at a specific location via a tiny opening that allows voltages to more or less equilibrate between cytoplasm and pipette/electrode. This is the first problem:

1. Series resistance. Series resistance $R_s$ is the electrical resistance between cell and pipette, caused by the small pipette neck and additional dirt that clogs the opening (like cell organelles…). The best and easiest-to-understand summary of series resistance effects that I found has been written by Bill Connelly (thanks to Labrigger for highlighting this blog). Series resistance makes voltage clamp recordings bad for two reasons: First, the signals are low-pass filtered with a time constant given by $R_s \cdot C_m$. Second, the resistance prevents the voltage in the cell from adapting to the voltage applied in the micropipette. Depending on the ratio $R_s/R_m$, the clamped voltage is more or less systematically wrong.

2. Space clamp. There is only one location which is properly voltage-clamped in the whole-cell mode, and this is the pipette itself. The cell body voltage is different from the pipette potential because of the series resistance (Ohm’s law).
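The two series-resistance effects can be put into numbers for a passive cell. A small sketch with purely illustrative values (50 MΩ access resistance, 2 GΩ membrane resistance, 10 pF capacitance); none of these figures come from the post:

```python
# Putting numbers on the two series-resistance effects described above:
# a steady-state voltage error (R_s and R_m act as a voltage divider for
# deviations from rest) and low-pass filtering with time constant R_s * C_m.
# The example values are purely illustrative.

def clamp_error(R_s, R_m, C_m, V_rest, V_cmd):
    """Return (steady-state cell voltage, voltage error, filter tau)."""
    frac = R_m / (R_s + R_m)                   # divider ratio
    V_cell = V_rest + frac * (V_cmd - V_rest)  # what the cell actually sees
    tau = R_s * C_m                            # low-pass time constant
    return V_cell, V_cmd - V_cell, tau

# 50 MOhm access resistance, 2 GOhm passive cell, 10 pF, clamping to 0 mV:
V_cell, err, tau = clamp_error(R_s=50e6, R_m=2e9, C_m=10e-12,
                               V_rest=-65e-3, V_cmd=0.0)
```

For these values the steady-state error is small (about 1.6 mV) because $R_s/R_m$ is tiny, but the error grows in proportion to any synaptic conductance that lowers the effective $R_m$.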
The voltage clamp in the dendrites is even more impaired by the electrical resistance between the dendrites and the soma. Therefore, voltage clamp at a membrane potential close to the ‘resting potential’ (-70 mV … -50 mV) is more or less reliable, whereas voltage clamp for recording inhibitory currents (0 mV … +20 mV) is less reliable for the dendritic parts, especially if the dendrites are small. To make things worse, the resistance between soma and dendrites is not necessarily constant over time. Imagine a case where inhibitory inputs open channels at the proximal dendrite. In this case, the electrical connection between the soma and the distal end of the dendrite will be impaired, and the voltage clamp will be worsened as well. Notably, this worsening of the voltage clamp would be tightly correlated with the strength of the input currents. In neurons that are large enough to record with patch pipettes both from soma and dendrites, one can test the space clamp error experimentally. And it is not small. If there are active conductances involved, the complexity of the situation increases even further. In a 1992 paper on series resistance and space clamp in whole-cell recordings, Armstrong and Gilly conclude with the following matter-of-fact paragraph: “We would like to end with a message of hope regarding the significance of current measurements in extended structures, but we cannot. Interpretation of voltage clamp data where adequate ‘space clamping’ is impossible is extremely difficult unless the membrane is passive and uniform in its properties and the geometry is simple. (…)”

3. Seal resistance. The neuron’s membrane potential deviates from the pipette’s potential by a factor that depends on series resistance. Both can be measured and used to calculate the true membrane potential. But there is a second confounding resistance, the seal resistance. Usually, it is neglected, because everything is fine as long as the seal resistance remains constant over the duration of the experiment.
But if one needs absolute, and not only relative, measurements of membrane resistance, firing threshold etc., seal resistance needs to be considered, especially in very small cells. In a good patch, the seal resistance is around 10 GΩ or more. But sometimes it is also a little bit less, maybe 5-8 GΩ. For small neurons, the membrane resistance can be in the same order of magnitude, for example 2-3 GΩ (and yes, I’m dealing with that kind of neurons myself). Seal resistance and membrane resistance therefore divide the applied voltage, leading to voltage errors and wrong measurements of membrane resistance (see also this study). With $R_m = 2$ GΩ and $R_{seal} = 8$ GΩ, this would lead to a false measurement of $R_m = 1.6$ GΩ. This error is not random, but systematic. Again, this could be corrected for by measuring the seal resistance before break-in and calculating the true membrane resistance afterwards. But it is – in my opinion – unlikely that the seal resistance remains unchanged during such a brutal act as a break-in. The local membrane tension around the seal becomes inhomogeneous after the break-in, and it sounds likely to me that this process creates small leaks that were sealed by dirt during the attached configuration. This is an uncertainty which cannot be quantified (to the best of my knowledge) and which makes quantitative measurements of $R_m$ and the true membrane potential in small cells very difficult.

4. The true membrane potential. The recorded membrane potential (say, the ‘resting membrane potential’, or the spiking threshold) is not necessarily the true membrane potential. First, series resistance introduces a systematic error – this is ok, it can be understood and accounted for. It is more difficult to correct for the errors induced by seal resistance, as mentioned.
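The 2 GΩ / 8 GΩ example above is just a parallel-resistor calculation; inverting it shows how the true $R_m$ could in principle be recovered, if the seal resistance were known and stable:

```python
# Seal resistance in parallel with the membrane biases the measured input
# resistance.  Reproduces the 2 GOhm / 8 GOhm example from the text.

def parallel(r1, r2):
    """Resistance of r1 and r2 in parallel."""
    return r1 * r2 / (r1 + r2)

def true_Rm(R_measured, R_seal):
    """Invert the parallel combination to recover the true membrane R_m."""
    return 1.0 / (1.0 / R_measured - 1.0 / R_seal)

R_m, R_seal = 2e9, 8e9
R_meas = parallel(R_m, R_seal)    # the biased value the amplifier reports
# R_meas is 1.6e9 (1.6 GOhm), matching the example in the text
```

The correction is only as good as the assumption that the seal resistance measured before break-in still holds afterwards, which is exactly the doubt raised in the post.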
One way to avoid leaks due to break-in is perforated-patch recording, which is however rather difficult, and probably impossible to combine with the small pipette tips that are required for the small neurons I’m interested in. In addition, I have always asked myself how well my internal solution matches the cytoplasm of my neurons in terms of relevant properties. Of course there are differences. But do these differences affect the membrane potential? I don’t see how this could be found out.

5. Correlation of inhibitory and excitatory currents. During voltage clamp, one can measure either inhibitory or excitatory currents, but not both at the same time. If everything were 100% reproducible, repeating the measurements in separate trials would be totally ok, but this is not the case. Instead, fluctuations of inhibitory and excitatory currents are typically correlated, although it is unclear to which degree. A way to navigate around this problem is to ignore it and simply work with averages over trials (as I did in the simulation at the beginning of my post). Another solution is to use highly reproducible and easy-to-time stimuli (electrical or optogenetic stimuli, acoustic stimuli) that lead to highly reproducible event-evoked currents. However, this also cannot help in understanding trial-to-trial co-variability of excitation and inhibition and similar aspects that take place on fast timescales. There are studies that patch two neighbouring neurons at the same time, measuring excitatory currents from one and inhibitory currents from the other neuron, but this is not exactly what one wants to have. There is actually a lack of techniques that would allow one to observe inhibitory and excitatory currents in the same neuron during the same trial, and this lack creates a lot of uncertainty about how neurons and neuronal networks operate in reality. All in all, this does not sound like good news for whole-cell voltage clamp experiments.
One problem I’m particularly frustrated about is that for many of these systematic errors there is no ground truth available, and it is totally unclear how large the errors actually are or how this could be found out. However, for many problems, whole-cell patch clamp is still the only available technique, despite all its traps and uncertainties. I’d like to end with the final note of the previously cited paper by Armstrong and Gilly: “[A]nd the burden of proof that results are interpretable must be on the investigator.” At this point, a big thank you goes to Katharina Behr, from whom I learned much of what I wrote down in this blog post. And of course any suggestions on what I am missing here are welcome!

This entry was posted in Data analysis, electrophysiology, Neuronal activity, zebrafish.

### 3 Responses to Whole-cell patch clamp, part 3: Limitations of quantitative whole-cell voltage clamp

1. John Tuthill says: Very nice discussion. For an example of simultaneous measurement of excitatory and inhibitory currents, see Cafaro and Rieke (2010). Not perfect, but instructive nonetheless.

• Hi John — interesting paper, I didn’t know it before. Funnily, I had tried this method (rapidly switching the holding potential between V_I and V_E) some time ago, but active conductances made any switching event something very strange and dominated by artifacts – maybe that was less of an issue for the retinal ganglion cells used in this study. And for input dynamics / noise correlations on a timescale < 10 ms, the method does not work anyway, unfortunately. But interesting nonetheless …
2017-05-29 17:07:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5869083404541016, "perplexity": 1523.3520907804182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612502.45/warc/CC-MAIN-20170529165246-20170529185246-00067.warc.gz"}
https://homework.study.com/explanation/find-a-polar-equation-for-the-curve-represented-by-the-given-cartesian-equation-a-9x-y-6-write-the-answer-in-the-form-r-f-t-where-t-stands-for-theta-b-x-2-y-2-5-write-the-an.html
# Find a polar equation for the curve represented by the given Cartesian equation: \\ (a) 9x + y =... ## Question: Find a polar equation for the curve represented by the given Cartesian equation: (a) {eq}9x + y = 6 {/eq} Write the answer in the form {eq}r = f(t) {/eq}, where {eq}t {/eq} stands for {eq}\theta {/eq}. (b) {eq}x^2 - y^2 = 5 {/eq} Write the answer in the form {eq}r^2 = f(t) {/eq}, where {eq}t {/eq} stands for {eq}\theta {/eq}. ## Finding the Polar Equivalent of an Equation for a Curve: To transform an equation of a curve into its polar equivalent, we write the equation in terms of the variables {eq}r {/eq} and {eq}\theta {/eq} using the equations {eq}x = r \cos \theta {/eq} and {eq}y = r \sin \theta {/eq} and possibly even using trigonometric identities to simplify the resulting equation. (a) \begin{align*} 9x + y &=...
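The site's worked answer is cut off above. Under the stated substitutions $x = r\cos\theta$, $y = r\sin\theta$, the two parts can be sketched as follows (my own derivation, not the site's full solution):

```latex
% (a) Substitute into 9x + y = 6:
9r\cos\theta + r\sin\theta = 6
\quad\Longrightarrow\quad r = \frac{6}{9\cos t + \sin t}

% (b) Substitute into x^2 - y^2 = 5 and use
%     \cos^2\theta - \sin^2\theta = \cos 2\theta:
r^2\cos^2\theta - r^2\sin^2\theta = r^2\cos 2\theta = 5
\quad\Longrightarrow\quad r^2 = \frac{5}{\cos 2t} = 5\sec 2t
```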
2022-11-28 04:06:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992436170578003, "perplexity": 2066.9451791922534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00123.warc.gz"}
https://www.freemathhelp.com/forum/threads/110038-Simple-module?p=423800
# Thread: Simple module: Let R be a ring with 1 and F a family of simple left R modules. 1. ## Simple module: Let R be a ring with 1 and F a family of simple left R modules. Let $R$ be a ring with $1$ and $\mathcal{F}$ a family of simple left $R$ modules. Let $M=\oplus_{S\in \mathcal{F}} S$ and suppose that $T$ is a simple submodule of $M$. Show that $T\cong S$ for some $S\in \mathcal{F}$. Thanks 2. Originally Posted by mona123 Let $R$ be a ring with $1$ and $\mathcal{F}$ a family of simple left $R$ modules. Let $M=\oplus_{S\in \mathcal{F}} S$ and suppose that $T$ is a simple submodule of $M$. Show that $T\cong S$ for some $S\in \mathcal{F}$. Thanks Please share your work with us ...even if you know it is wrong. If you are stuck at the beginning tell us and we'll start with the definitions. http://www.freemathhelp.com/forum/announcement.php?f=33
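For anyone reading along later, the standard line of attack (a sketch only, with details left as the exercise the reply asks for) is to project $T$ onto the summands:

```latex
% Pick 0 \ne t \in T \le M = \bigoplus_{S \in \mathcal{F}} S.
% t has only finitely many nonzero coordinates, so some canonical
% projection \pi_S : M \to S satisfies \pi_S(t) \ne 0, i.e. \pi_S|_T \ne 0.
% ker(\pi_S|_T) is a proper submodule of the simple module T, hence 0;
% im(\pi_S|_T) is a nonzero submodule of the simple module S, hence all of S:
T \;\cong\; T/\ker(\pi_S|_T) \;\cong\; \operatorname{im}(\pi_S|_T) \;=\; S
```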
2018-08-19 11:08:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7667805552482605, "perplexity": 171.61990481793885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215077.71/warc/CC-MAIN-20180819110157-20180819130157-00396.warc.gz"}
https://nrich.maths.org/4561/note
You may like to try our Fractional Wall problem before this one.

##### Age 7 to 11, Challenge Level

Using the fraction wall above, can you say which is bigger, $\frac{1}{3}$ or $\frac{2}{8}$? By how much? Which is smaller, $\frac{5}{6}$ or $\frac{3}{4}$? By how much? What is the difference between $\frac{5}{6}$ and $\frac{1}{3}$? What is three quarters of $\frac{2}{3}$? Can you explain how you worked this out?

Having a visual representation of fractions as a wall will aid children's understanding of both equivalent fractions and comparisons of fractions, but perhaps it will also equip them with a method to help them in the future. It is important that pupils have an appreciation of what is being taken as "the whole", and this may need some discussion before the problem as it stands is tackled. Using real Cuisenaire rods to aid manipulation of fractions is invaluable, although the lengths pose some restrictions. However, OHT rods could be used as a way into the problem by concentrating on just a few different lengths to start with. This could be used to help older children to understand the concepts necessary for manipulating fractions to add and subtract them. It is amazing how much easier the visualisation makes the arithmetic process. Just try it yourself and see.
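The comparisons posed above can be checked exactly with rational arithmetic; a quick sketch using Python's standard `fractions` module:

```python
# Verifying the fraction-wall comparisons exactly with rational arithmetic.
from fractions import Fraction as F

assert F(1, 3) > F(2, 8)                 # 1/3 is bigger than 2/8
assert F(1, 3) - F(2, 8) == F(1, 12)     # ... by 1/12
assert F(3, 4) < F(5, 6)                 # 3/4 is smaller than 5/6
assert F(5, 6) - F(3, 4) == F(1, 12)     # ... also by 1/12
assert F(5, 6) - F(1, 3) == F(1, 2)      # their difference is one half
assert F(3, 4) * F(2, 3) == F(1, 2)      # three quarters of 2/3 is 1/2
```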
2021-11-30 23:57:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5345688462257385, "perplexity": 742.3337935866273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.76/warc/CC-MAIN-20211130232232-20211201022232-00259.warc.gz"}
https://lifenujecatefo.hotseattleseahawksjerseys.com/theorems-concerning-probability-book-27205xf.php
Last edited by Tezil Sunday, May 3, 2020 | History

2 editions of Theorems concerning probability ... found in the catalog.

# Theorems concerning probability ...

William Dowell Baten

Written in English

Subjects:
• Probabilities

Edition Notes

The Physical Object
Statement: by William Dowell Baten.
Pagination: 2 p. l., 48 p.
Number of Pages: 48
ID Numbers: Open Library OL14766991M

This book is intended as an elementary introduction to the theory of probability for students in mathematics, statistics, engineering, and the sciences (including computer science, biology, the social sciences, and management science) who possess the. Probability: The Classical Limit Theorems. The theory of probability has been extraordinarily successful at describing a variety of natural phenomena, from the behavior of gases to the transmission of information, and is a powerful tool with applications throughout mathematics. At its heart are a number

### Theorems concerning probability ... by William Dowell Baten
[Copifyed by the Copifyer Corp.] (OCoLC) Material Type: Thesis/dissertation. Document Type: Book. Existence Theorems in Probability Theory, Sergio Fajardo and H. Jerome Keisler, Universidad de Los Andes and Universidad Nacional, Bogotá, Colombia; University of Wisconsin, Madison WI. 0. Introduction: Existence and Compactness 1. Preliminaries 2. Neocompact Sets 3. General Neocompact. Probability, Statistics, and Mathematics: Papers in Honor of Samuel Karlin is a collection of papers dealing with probability, statistics, and mathematics. Conceived in honor of Polish-born mathematician Samuel Karlin, the book covers a wide array of topics, from the second-order moments of a stationary Markov chain to the exponentiality of the. 2 Convergence Theorems. Basic Theorems: 1. Relationships between convergence: (a) convergence a.c. ⇒ convergence in probability ⇒ weak convergence. (b) convergence in $L^p$ ⇒ convergence in $L^q$ ⇒ convergence in probability ⇒ weak convergence, for $p \ge q \ge 1$. (c) convergence in KL divergence ⇒ convergence in total variation ⇒ strong convergence of measure ⇒ weak convergence. Pages in category "Probability theorems": The following pages are in this category, out of total. This list may not reflect recent changes. Mathematical Theory of Probability and Statistics focuses on the contributions and influence of Richard von Mises on the processes, methodologies, and approaches involved in the mathematical theory of probability and statistics. The publication first elaborates on fundamentals, general label space, and basic properties of distributions.
Jiri Andel begins with a basic introduction to probability theory and its important points before moving on to more specific sections on vital aspects of. Probability theory is an actively developing branch of mathematics. It has applications in many areas of science and technology and forms the basis of mathematical statistics. This self-contained, comprehensive book tackles the principal problems and advanced questions of probability theory and random processes in 22 chapters, presented in a. The Best Books to Learn Probability: probability theory is the mathematical study of uncertainty. It plays a central role in machine learning, as the design of learning algorithms often relies on probabilistic assumption of the. Theorems and Conditional Probability: 1. Elementary Theorems and Conditional Probability. 2. Theorems 1, 2: Generalization of the third axiom of probability. Theorem 1: If A1, A2, ..., An are mutually exclusive events in a sample space, then P(A1 ∪ A2 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An). Rule for the probability of an event. Theorem 2: If A is an event in the. This book offers a superb overview of limit theorems and probability inequalities for sums of independent random variables. Unique in its combination of both classic and recent results, the book details the many practical aspects of these important tools for solving a great variety of problems. We'll work through five theorems in all, in each case first stating the theorem and then proving it. Then, once we've added the five theorems to our probability tool box, we'll close this lesson by applying the theorems to a few examples. Theorem #1. P(A) = 1 − P(A'). Proof of Theorem #1. Theorem #2. Probability Study Tips: If you’re going to take a probability exam, you can better your chances of acing the test by studying the following topics. They have a high probability of being on the exam. The relationship between mutually exclusive and independent events. Identifying when a probability is a conditional probability in a word problem. This book offers a superb overview of limit theorems and probability inequalities for sums of independent random variables. Unique in its combination of both classic and recent results, the book details the many practical aspects of these important tools for solving a great variety of problems. Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure. This book is an introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action, so there are. The theorem states that the probability of the simultaneous occurrence of two events that are independent is given by the product of their individual probabilities. ${P(A\ and\ B) = P(A) \times P(B) \\[7pt] P (AB) = P(A) \times P(B)}$ The theorem can be extended to three or more independent events. For convenience, we assume that there are two events; however, the results can be easily generalised. The probability of the compound event would depend upon whether the events are independent or not. Thus, we shall discuss two theorems: (a) Conditional Probability Theorem, and (b) Multiplicative Theorem for Independent Events. In his recent book on brownian motion [4, pp. ] P.
Levy quotes a result of Dvoretzky and Erdos [3, Theorem 5] concerning brownian motion in n dimensions. Books shelved as probability-theory: An Introduction to Probability Theory and Its Applications, Volume 1 by William Feller, Probability and Measure by P. Limit Theorems for Stochastic Processes, 2nd Edition. Stochastic Integration and Differential Equations: A New Approach (Stochastic Modelling and Applied Probability, Book 21), Philip Protter. Semimartingales and stochastic integrals. Apart from a few exceptions essentially concerning diffusion processes, it is only. This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. A mathematical proof is an inferential argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Mathematics of Chance utilizes simple, real-world problems, some of which have only recently been solved, to explain fundamental probability theorems, methods, and statistical reasoning. Jiri Andel begins with a basic introduction to probability theory and its important points before moving on to more specific sections on vital aspects of probability, using both classic and modern problems. Results are obtained concerning the transition probabilities and absorption probabilities of θ(t). The limiting distribution of (2^{-1} log t)^{-1} θ(t) is found to be the Cauchy distribution. This problem has also been considered by P.
Lévy, who showed that the distribution of θ(t) must have infinite. Some of the most momentous theorems that have a very central role and widespread applications in probability, statistics, and other branches of knowledge are those concerning limit theorems. Among those theorems, probably various versions of the laws of large numbers and the central limit theorem are the most prominent. Author: Saeed Ghahramani. Chapter 2: Discrete Random Variables and Probability Distributions. At this point, we have considered discrete sample spaces and we have derived theorems concerning probabilities for any discrete sample space and - Selection from Probability: An Introduction with Statistical Applications, 2nd Edition [Book]. On Tauberian theorems in probability theory. In book: Probability Measures on Groups IX, pp. ... is a random variable N(x) and theorems concerning N(x) are renewal theorems. Author: Nicholas Bingham. The book contains examples as varied as politics, wine ratings, and school grades to show how a misunderstanding of probability causes people to misinterpret random events. Mlodinow’s three laws of probability are as follows: The probability that two events will both occur can never be greater than the probability that each will occur. The new organization presents information in a logical, easy-to-grasp sequence, incorporating the latest trends and scholarship in the field of probability and statistics. Expanded coverage of probability and statistics includes: five chapters that focus on probability and probability distributions, including discrete data, order statistics, multivariate distributions, and normal distributions. Henry McKean’s new book Probability: The Classical Limit Theorems packs a great deal of material into a moderate-sized book, starting with a synopsis of measure theory and ending with a taste of current research into random matrices and number theory. The book ranges more widely than the title might suggest.
The classical limit theorems of probability — the weak and strong laws of large numbers. Set books: The notes cover only material in the Probability I course. The text-books listed below will be useful for other courses on probability and statistics. You need at most one of the three textbooks listed below, but you will need the statistical tables. • Probability and. At undergraduate level, it is interesting to work with the moment generating function and state the above theorem without proving it. The proof requires far more advanced mathematics than undergraduate level. Strong Approximations in Probability and Statistics presents strong invariance type results for partial sums and empirical processes of independent and identically distributed random variables (IIDRV). This seven-chapter text emphasizes the applicability of strong approximation methodology to a variety of problems of probability and statistics. The famous text An Introduction to Probability Theory and Its Applications (New York: Wiley, ). In the preface, Feller wrote about his treatment of fluctuation in coin tossing: “The results are so amazing and so at variance with common intuition that even sophisticated colleagues doubted that coins actually misbehave as theory predicts.” Concerning the elements of plane geometry: We will call them, therefore, the plane axioms of group I, in order to distinguish them from the axioms I, 3–7, which we will designate briefly as the space axioms of this group. Of the theorems which follow from the axioms I. Contents: 1. Axioms of Probability: Introduction; Sample Space and Events; Axioms of Probability; Basic Theorems; Continuity of Probability Function; Probabilities 0 and 1; Random Selection of Points from Intervals; Review Problems. 2. Combinatorial Methods: Introduction; Counting Principle. Conditional probability: Abstract visualization and coin example. Note, A ⊂ B in the right-hand figure, so there are only two colors shown.
The formal definition of conditional probability catches the gist of the above example and visualization. Formal definition of conditional probability: Let A and B be events. UNESCO – EOLSS SAMPLE CHAPTERS PROBABILITY AND STATISTICS – Vol. I - Limit Theorems of Probability Theory - G. Christoph ©Encyclopedia of Life Support Systems (EOLSS) 1. Introduction and Preliminaries. Probability theory is motivated by the idea that the unknown probability p of an event A is approximately equal to r/n, if n trials result in r realisations of the event A, and the. Note that often probability spaces are defined such that the algebra of subsets is a sigma-algebra. We shall revisit these concepts later, and restrict ourselves to the above definition, which seems to capture the intuitive concept of probability quite well. Elementary theorems. Limit Theorems: In this section, we will discuss two important theorems in probability, the law of large numbers (LLN) and the central limit theorem (CLT). The LLN basically states that the average of a large number of i.i.d. random variables converges to the expected value. Fundamentals of probability theory required in the remainder of the book. Since most of the technical mathematics problems in probability relate to integration, Bühlmann has thoughtfully provided an appendix in which some of the principal definitions and theorems concerning the generalized. 2. Sample Space and Probability (Chap. 1): Probability is a very useful concept, but can be interpreted in a number of ways. As an illustration, consider the following. A patient is admitted to the hospital and a potentially life-saving drug is
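Several of the excerpts above state the same elementary rules (the complement rule, finite additivity for mutually exclusive events, and the multiplication theorem for independent events). They can all be checked exhaustively on a toy sample space; a sketch on two fair dice, using exact rational arithmetic:

```python
# Exhaustive check, on two fair dice, of the elementary rules quoted in the
# excerpts above: the complement rule, additivity for mutually exclusive
# events, and the multiplication theorem for independent events.
from fractions import Fraction

outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def P(event):
    """Exact probability of `event` under the uniform distribution."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

# Complement rule: P(A) = 1 - P(A')
A = lambda o: o[0] + o[1] == 7
assert P(A) == 1 - P(lambda o: not A(o)) == Fraction(1, 6)

# Theorem 1 (finite additivity for mutually exclusive events):
A1 = lambda o: o[0] + o[1] == 2
A2 = lambda o: o[0] + o[1] == 12
assert P(lambda o: A1(o) or A2(o)) == P(A1) + P(A2) == Fraction(1, 18)

# Multiplication theorem: P(A and B) = P(A) * P(B) for independent events
E = lambda o: o[0] % 2 == 0     # first die even      -> P = 1/2
F = lambda o: o[1] >= 5         # second die 5 or 6   -> P = 1/3
assert P(lambda o: E(o) and F(o)) == P(E) * P(F) == Fraction(1, 6)
```

Here independence of E and F holds by construction, since each depends on a different die.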
http://math.stackexchange.com/questions/154536/counting-question-on-permutation-matrices-with-rotation-and-imprinting
# Counting question on permutation matrices with rotation and imprinting Please first read the question on distinct permutation matrices with rotation; the new counting questions are below: 1. For a distinct $N\times N$ zero-symmetry permutation matrix, we can rotate it 3 times and imprint all 4 images onto a single canvas. The final imprinted matrix then has at most $4\times N$ cells selected. I call such a matrix Dispersed when that bound is attained. How many of the $N\times N$ permutation matrices are Dispersed? 2. Some of the imprinted matrices above coincide. How many distinct imprinted matrices are there? Some examples of imprinted images for $4\times 4$ matrices (please look only at rows 2, 3, 5 and 6, which are zero-symmetry matrices. The images on the left side are the originals, and those on the right side are the imprinted ones. In particular, the images in rows 5 and 6 are Dispersed): - Is this question really so hard? –  gnozil Jun 26 '12 at 7:38
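Absent a closed form, question 1 can at least be brute-forced for small $N$. The sketch below is my own (not from the linked question): it counts permutations whose four rotation images are pairwise disjoint, i.e. whose imprint selects exactly $4N$ cells, and it ignores the zero-symmetry restriction defined in the linked question, so it over-counts relative to that restriction.

```python
from itertools import permutations

def rot90(cells, n):
    """Rotate a set of (row, col) cells of an n x n grid by 90 degrees."""
    return {(c, n - 1 - r) for r, c in cells}

def is_dispersed(p):
    """True if the 4 rotation images of the permutation matrix of p are pairwise disjoint."""
    n = len(p)
    cells = {(i, p[i]) for i in range(n)}
    union, img = set(cells), cells
    for _ in range(3):
        img = rot90(img, n)
        union |= img
    return len(union) == 4 * n  # no cell imprinted twice

def count_dispersed(n):
    """Brute-force count over all n! permutations (feasible only for small n)."""
    return sum(is_dispersed(p) for p in permutations(range(n)))
```

For example, `is_dispersed((0, 1, 2, 3))` is `False`, since the main diagonal maps onto itself under a 180° rotation, while `is_dispersed((1, 0, 2, 3))` is `True`: its four rotations tile all 16 cells.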
https://ojmo.centre-mersenne.org/articles/10.5802/ojmo.13/
# Open Journal of Mathematical Optimization Characterizations of Stability of Error Bounds for Convex Inequality Constraint Systems Open Journal of Mathematical Optimization, Volume 3 (2022), article no. 2, 17 p. In this paper, we mainly study error bounds for a single convex inequality and for semi-infinite convex constraint systems, and give characterizations of the stability of error bounds via directional derivatives. For a single convex inequality, it is proved that the stability of local error bounds under small perturbations is essentially equivalent to the non-zero minimum of the directional derivative at a reference point over the unit sphere, and the stability of global error bounds is proved to be equivalent to the strictly positive infimum of the directional derivatives, at all points in the boundary of the solution set, over the unit sphere, together with a mild constraint qualification. When these results are applied to semi-infinite convex constraint systems, characterizations of the stability of local and global error bounds under small perturbations are also provided. In particular, such stability of error bounds is proved to require only that all component functions in the semi-infinite convex constraint system have the same linear perturbation. Our work demonstrates that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex minimization problems (defined by directional derivatives) over the unit sphere. 
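The sphere criterion can be illustrated numerically in the plane. For f(x) = ||x|| - 1, the directional derivative at a boundary point of {f <= 0} attains the minimum -1 over the unit sphere (non-zero, so the local error bound is stable), while for g(x) = max(0, ||x|| - 1)^2 the directional derivative vanishes in every direction, the minimum is 0, and the criterion fails. A rough finite-difference sketch (my own illustration under these assumptions, not code from the paper):

```python
import math

def min_dir_derivative(f, x, t=1e-6, samples=720):
    """Approximate the minimum over unit directions h of the directional derivative f'(x; h),
    using forward differences and a uniform sample of directions on the unit circle."""
    best = math.inf
    for k in range(samples):
        a = 2 * math.pi * k / samples
        h = (math.cos(a), math.sin(a))
        d = (f(x[0] + t * h[0], x[1] + t * h[1]) - f(x[0], x[1])) / t
        best = min(best, d)
    return best

f = lambda x, y: math.hypot(x, y) - 1                 # minimum is -1: stable error bound
g = lambda x, y: max(0.0, math.hypot(x, y) - 1) ** 2  # minimum is 0: criterion fails

# at the boundary point (1, 0) of the unit disk:
print(min_dir_derivative(f, (1.0, 0.0)))  # close to -1
print(min_dir_derivative(g, (1.0, 0.0)))  # close to 0
```

The contrast mirrors the paper's message: checking stability of the error bound reduces to a minimization of directional derivatives over the unit sphere.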
DOI: 10.5802/ojmo.13
Keywords: Local and global error bounds, Stability, Convex inequality, Semi-infinite convex constraint systems, Directional derivative
Zhou Wei 1, 2; Michel Théra 3; Jen-Chih Yao 4
1 College of Mathematics and Information Science, Hebei University, Baoding 071002, China
2 Department of Mathematics, Yunnan University, Kunming 650091, China
3 XLIM UMR-CNRS 7252, Université de Limoges, Limoges, France; Federation University Australia, Ballarat
4 Research Center for Interneural Computing, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
Zhou Wei; Michel Théra; Jen-Chih Yao. Characterizations of Stability of Error Bounds for Convex Inequality Constraint Systems. Open Journal of Mathematical Optimization, Volume 3 (2022), article no. 2, 17 p. doi: 10.5802/ojmo.13.
https://www.zbmath.org/authors/?q=ai%3Abiswas.sujay-kr
## Biswas, Sujay Kr.
Author ID: biswas.sujay-kr. Published as: Biswas, Sujay Kr.; Biswas, Sujay. Documents Indexed: 51 Publications since 1980. Co-Authors: 2 Co-Authors with 5 Joint Publications; 71 Co-Co-Authors.
### Co-Authors
2 single-authored; 7 Modak, Bijan; 6 Arvind, Vikraman; 5 Nayek, P.; 3 Chakraborty, Subenoy; 3 Kamilya, Supreeti; 2 Das, Alaka; 2 De, Bhaskar; 2 Sarkar, N. G.; 1 Athre, K.; 1 Bhattacharjee, G. P.; 1 Ghatak, Kamakhya Prasad; 1 Grover, Gurprit; 1 Guha, J.; 1 Guptaroy, P.; 1 Hirani, Harish; 1 Jayaram, V.; 1 Kapur, P. K.; 1 Kumar, S.; 1 Mahapatra, Ghanshaym Singha; 1 Malhotra, Vishv Mohan; 1 Mallik, Awadesh K.; 1 Mehta, Y. K.; 1 Narasimhan, Priya; 1 Pal, Debkumar; 1 Rajendrakumar, P. K.; 1 Samanta, Guru Prasad; 1 Sau, Goutam; 1 Shivashankar, S. A.; 1 Siracusa, R.; 1 Sivaram, C.; 1 Thanh Tung Nguyen; 1 Varshney, Manoj Kumar
### Serials
11 General Relativity and Gravitation; 6 International Journal of Theoretical Physics; 4 International Journal of Modern Physics D; 3 International Journal of Modern Physics A; 2 International Journal of Mechanical Sciences; 2 IAPQR Transactions; 1 Modern Physics Letters A; 1 Classical and Quantum Gravity; 1 International Journal of Control; 1 Information Processing Letters; 1 Physics Letters. A; 1 Biometrical Journal; 1 Mechanics Research Communications; 1 Theoretical Computer Science; 1 International Journal of Foundations of Computer Science; 1 International Journal of Computer Mathematics; 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering; 1 International Applied Mechanics; 1 Applied Mechanics and Engineering; 1 Far East Journal of Theoretical Statistics; 1 Wireless Networks; 1 Physica Scripta; 1 Sādhanā; 1 International Journal of Geometric Methods in Modern Physics; 1 Journal of Applied Nonlinear Dynamics
### Fields
20 Relativity and gravitational theory (83-XX); 14 Quantum theory (81-XX); 9 Computer science (68-XX); 5 Mathematical logic and foundations (03-XX); 5 Mechanics of deformable solids (74-XX); 4 Dynamical systems and ergodic theory (37-XX); 4 Statistics (62-XX); 3 Differential geometry (53-XX); 3 Biology and other natural sciences (92-XX); 2 Ordinary differential equations (34-XX); 2 Probability theory and stochastic processes (60-XX); 2 Fluid mechanics (76-XX); 2 Statistical mechanics, structure of matter (82-XX); 1 Combinatorics (05-XX); 1 Calculus of variations and optimal control; optimization (49-XX); 1 Numerical analysis (65-XX); 1 Operations research, mathematical programming (90-XX)
### Citations contained in zbMATH Open
24 Publications have been cited 70 times in 9 Documents. Cited by Year:
Particle production in de Sitter space. Zbl 0824.53085 Biswas, S.; Guha, J.; Sarkar, N. G. 1995
Interacting dark energy in $$f(T)$$ cosmology: a dynamical system analysis. Zbl 1320.83048 Biswas, Sujay Kr.; Chakraborty, Subenoy 2015
An $$O(n^2)$$ algorithm for the satisfiability problem of a subset of propositional sentences in CNF that includes all Horn sentences. Zbl 0635.68108 Arvind, V.; Biswas, S. 1987
Evolution of dynamical coupling in scalar tensor theory from Noether symmetry. Zbl 0972.83059 Modak, B.; Kamilya, S.; Biswas, S. 2000
The complex time WKB approximation and particle production.
Zbl 0982.83043 Biswas, S.; Shaw, A.; Modak, B. 2000 Induced gravity theory from Noether symmetry. Zbl 1048.83033 Kamilya, S.; Modak, B.; Biswas, S. 2004 The CWKB particle production and classical condensate in de Sitter space-time. Zbl 1115.83017 Biswas, S.; Chowdhury, I. 2006 Particle production in de Sitter space-time. Zbl 0952.81024 Sarkar, N. G.; Biswas, S. 2000 Quantum gravity equation in Schrödinger form in minisuperspace description. Zbl 0988.83029 Biswas, S.; Shaw, A.; Modak, B.; Biswas, D. 2000 Decoherence in the Starobinsky model. Zbl 0933.83039 Biswas, S.; Shaw, A.; Modak, B. 1999 Particle production in expanding spacetime. Zbl 1011.83044 Biswas, S.; Shaw, A.; Misra, P. 2002 Dynamical systems analysis of an interacting dark energy model in the brane scenario. Zbl 1317.83098 Biswas, Sujay Kr.; Chakraborty, Subenoy 2015 Kernel constructible languages. Zbl 0532.68077 Arvind, V.; Biswas, S. 1983 Optimal control using an algebraic method for control-affine non-linear systems. Zbl 1194.49036 Won, C.-H.; Biswas, S. 2007 A finite element study of the indentation mechanics of an adhesively bonded layered solid. Zbl 0899.73535 Narasimhan, R.; Biswas, S. K. 1998 A Noether symmetry study in scalar-tensor theory. Zbl 0931.83010 Modak, B.; Kamilya, S.; Biswas, S. 1998 Validation of stresses and stress intensity factor in a notched bilayer system under four point bending, as determined by the solution of the Navier’s equation. Zbl 1192.74142 Mukherjee, S.; Jayaram, V.; Biswas, S. K. 2006 The CWKB method of particle production in a periodic potential. Zbl 1021.83030 Biswas, S.; Misra, P.; Chowdhury, I. 2003 Does operator ordering need wormhole dominance in the wavefunction of the universe? Zbl 1015.83046 Biswas, S.; Chowdhury, I.; Misra, P. 2003 Elastic contact between a cylindrical surface and a flat surface. – A non-Hertzian model of multi-asperity contact. Zbl 0894.73140 Rajendrakumar, P. K.; Biswas, S. K. 
1996
On some bandwidth restricted versions of the satisfiability problem of propositional CNF formulas. Zbl 0687.03018 Arvind, V.; Biswas, S. 1989
On certain bandwidth restricted versions of the satisfiability problem of propositional CNF formulas. Zbl 0634.03032 Arvind, V.; Biswas, S. 1987
Schrödinger-Wheeler-DeWitt equation in multidimensional cosmology. Zbl 1155.83357 Biswas, S.; Shaw, A.; Biswas, D. 2001
Time in quantum gravity. Zbl 1155.83333 Biswas, S.; Shaw, A.; Modak, B. 2001
### Cited by 21 Authors
2 Biswas, Sujay Kr.; 2 Chakraborty, Subenoy; 1 Bahamonde, Sebastian; 1 Böhmer, Christian G.; 1 Carloni, Sante; 1 Chaubey, Raghavendra; 1 Copeland, Edmund J.; 1 Debnath, Ujjal; 1 Fang, Wei; 1 Mandal, Jyotirmay Das; 1 Mirza, Behrouz; 1 Oboudiat, Fatemeh; 1 Odintsov, Sergei D.; 1 Oikonomou, Vasilis K.; 1 Petrot, Narin; 1 Raushan, Rakesh; 1 Salti, Mustafa; 1 Sogut, Kenan; 1 Tamanini, Nicola; 1 Tangkhawiwetkul, Jittiporn; 1 Yeter, U.
### Cited in 8 Serials
2 International Journal of Geometric Methods in Modern Physics; 1 Classical and Quantum Gravity; 1 Physics Reports; 1 International Journal of Modern Physics D; 1 Journal of Cosmology and Astroparticle Physics; 1 Thai Journal of Mathematics; 1 Foundations of Physics; 1 Advances in High Energy Physics
### Cited in 9 Fields
7 Relativity and gravitational theory (83-XX); 3 Dynamical systems and ergodic theory (37-XX); 2 Astronomy and astrophysics (85-XX); 1 Ordinary differential equations (34-XX); 1 Operator theory (47-XX); 1 Differential geometry (53-XX); 1 Mechanics of particles and systems (70-XX); 1 Fluid mechanics (76-XX); 1 Quantum theory (81-XX)
https://ltwork.net/which-of-the-following-is-a-difference-between-the-trapper--1212208
# Which of the following is a difference between the Trapper Mine in Colorado and other reclaimed coal mines?

###### Question:

Which of the following is a difference between the Trapper Mine in Colorado and other reclaimed coal mines?

A) The land at the Trapper Mine in Colorado is reclaimed without artificially applying water.
B) The Trapper Mine is reclaimed as they are finishing one mine area, before opening a second mine area.
C) The land at the Trapper Mine in Colorado is not reclaimed; they are allowing natural reclamation.
D) The air and water quality, habitat and species diversity, and ozone and carbon dioxide equipment levels are all checked on a regular basis.
E) All of the above
http://hernanifaustino.com/articles/19b191-recursive-least-squares-c
# Recursive least squares in C

December 2, 2020

Abstract: Conventional Recursive Least Squares (RLS) filters have a complexity of 1.5L^2 products per sample, where L is the number of parameters in the least squares model. The recently published FWL RLS algorithm has a complexity of L^2, about 33% lower. We present an algorithm which has a complexity between 5L …

The celebrated recursive least-squares (RLS) algorithm (e.g. [16, 14, 25]) is a popular and practical algorithm used extensively in signal processing, communications and control. It is a well-known adaptive filtering algorithm that efficiently updates or "downdates" the least squares estimate, and it is important to generalize it to the generalized LS (GLS) problem. The recursive least-squares algorithm is the exact mathematical equivalent of the batch least-squares. We present the algorithm and its connections to the Kalman filter in this lecture. A description can be found in Haykin, edition 4, chapter 5.7, pp. 285-291 (edition 3: chapter 9.7, pp. 412-421).

- The RLS algorithm solves the least squares problem recursively.
- At each iteration, when a new data sample is available, the filter tap weights are updated.
- This leads to savings in computations.
- More rapid convergence is also achieved.

The RLS algorithm has a higher computational requirement than LMS, but behaves much better in terms of steady-state MSE and transient time. Once initialized, no matrix inversion is needed, the matrices stay the same size all the time, and the algorithm is computationally very efficient.

8.1 Recursive Least Squares

Let us start this section with perhaps the simplest application possible, nevertheless introducing ideas. So far, we have considered the least squares solution to a particularly simple estimation problem in a single unknown parameter. The analytical solution for the minimum (least squares) estimate is

$$\hat{a}_k = p_k^{-1} b_k, \qquad p_k = \sum_{i=1}^{k} x_i^2, \qquad b_k = \sum_{i=1}^{k} x_i y_i,$$

where p_k and b_k are functions of the number of samples; this is the non-sequential (non-recursive) form. A more general problem is the estimation of the n unknown parameters a_j, j = 1, 2, …, n, appearing in a general nth order linear regression relationship of the form

$$x(k)={a_1}{x_1}(k)+{a_2}{x_2}(k)+\cdots+{a_n}{x_n}(k).$$

Do we have to recompute everything each time a new data point comes in, or can we write our new, updated estimate in terms of our old estimate? More specifically, suppose we have an estimate x̃_{k−1} after k − 1 measurements, and obtain a new measurement y_k. To be general, every measurement is now an m-vector with values yielded by, say, several measuring instruments. Under the least squares principle, we will try to find the value of x̃ that minimizes the cost function J … A linear recursive estimator can be written in the following form:

$$y_k = H_k x + \nu_k, \qquad \tilde{x}_k = \tilde{x}_{k-1} + K_k (y_k - H_k \tilde{x}_{k-1}). \tag{6}$$

Here H_k is an m×n matrix, and K_k is n×m and referred to as the estimator gain matrix. We refer to y_k − H_k x̃_{k−1} as the correction term.

This section shows how to recursively compute the weighted least squares estimate. A least squares solution to the above problem is

$$\min_{\hat{W}} \lVert d - U \hat{W} \rVert^2, \qquad \hat{W} = (U^H U)^{-1} U^H d.$$

Let Z = U^H d be the cross-correlation vector and Φ = U^H U be the covariance matrix; then Ŵ = Φ^{-1} Z. The above equation could be solved on a block-by-block basis, but we are interested in recursive determination of the tap weight estimates w.

An alternative form of the matrix inversion lemma, useful for deriving recursive least-squares, is obtained when B and C are n×1 and 1×n (i.e. column and row vectors):

$$(A+BC)^{-1} = A^{-1} - \frac{A^{-1} B C A^{-1}}{1 + C A^{-1} B}.$$

Now consider P(t+1) = [X^T(t)X(t) + x(t+1)x^T(t+1)]^{-1} and use the matrix-inversion lemma with A = X^T(t)X(t), B = x(t+1), C = x^T(t+1) (Adaptive Control Lecture Notes, Guy A. Dumont, 1997-2005, p. 84). In the recursive least squares derivation, plugging in the previous two results and rearranging terms gives the update: define the a-priori output estimate and the a-priori output estimation error, and the RLS algorithm is given by their recursive computation using the matrix inversion lemma. The algorithm has to be initialized with q̂(0) and P(0). P is proportional to the covariance matrix of the estimate, and is thus called the covariance matrix. It can be shown that by initialising w_0 = 0 ∈ R^d and Γ_0 = I ∈ R^{d×d}, the solution of the linear least …

The Recursive Least Squares Estimator estimates the parameters of a system using a model that is linear in those parameters. Such a system has the following form: y = Hθ, where y and H are known quantities that you provide to the block to estimate θ. Create a System object for online parameter estimation using the recursive least squares algorithm for a system with two parameters and known initial parameter values:

obj = recursiveLS(2,[0.8 1],'InitialParameterCovariance',0.1);

InitialParameterCovariance represents the uncertainty in your guess for the initial parameters. The matrix-inversion-lemma based recursive least squares (RLS) approach is of a recursive form and free of matrix inversion, and has excellent performance regarding computation and memory in solving the classic least-squares (LS) problem.

Computer exercise 5: Recursive Least Squares (RLS). This computer exercise deals with the RLS algorithm. The example application is adaptive channel equalization, which has been introduced in computer exercise 2. Two recursive (adaptive) filtering algorithms are compared: Recursive Least Squares (RLS) and LMS.

The Recursive Least Squares Filter: consider the scenario of transmitting a signal u[t] over a noisy fading channel. We can model the received signal x at time t by

$$x[t] = \sum_{k=0}^{m-1} c_i[k]\, u[t-k] + n[t],$$

where the c_i[k] are the channel parameters and m is the memory of the channel. Assume that u[t] = 0 for t < 1 (the pre-windowing approach [3]). The recursive least squares (RLS) algorithm considers an online approach to the least squares problem.

I'm trying to implement multi-channel lattice RLS, i.e. the recursive least squares algorithm which performs noise cancellation with multiple inputs, but a single 'desired output'. I have the basic RLS algorithm working with multiple components, but it's too inefficient and memory intensive for my purpose. Wikipedia has an excellent example of lattice RLS, which works great. I'm vaguely familiar with recursive least squares algorithms; all the information about them I can find is in the general form with vector parameters and measurements. I need a recursive least squares (RLS) implementation written in ANSI C for online system identification purposes. The RLS will need to support at least 20 inputs and 20 outputs using the ARX model structure. This will require a matrix library as well for whatever is needed (transpose, inverse, etc.).

Code Explanation: class padasip.filters.rls.FilterRLS(n, mu=0.99, eps=0.1, w='random'), bases padasip.filters.base_filter.AdaptiveFilter — an adaptive RLS filter (recursive least-squares adaptive filter).

2.6: Recursive Least Squares (optional). Contributed by Mohammed Dahleh, Munther A. Dahleh, and George Verghese, Professors (Electrical Engineering and Computer Science) at the Massachusetts Institute of Technology; sourced from MIT OpenCourseWare. An Implementation Issue; Interpretation; What if the data is coming in sequentially?

Recursive Least Squares Parameter Estimation for Linear Steady State and Dynamic Models. Thomas F. Edgar, Department of Chemical Engineering, University of Texas, Austin, TX 78712.

A compact realtime embedded Attitude and Heading Reference System (AHRS) using Recursive Least Squares (RLS) for magnetometer calibration and EKF/UKF for sensor fusion on the Arduino platform (arduino, real-time, embedded, teensy, cpp, imu, quaternion, unscented-kalman-filter, ukf, ekf, control-theory, kalman-filter, rls, ahrs, extended-kalman-filters, recursive-least-squares, …).

In this study, a recursive least square (RLS) notch filter was developed to effectively suppress electrocardiogram (ECG) artifacts from EEG recordings. ECG artifacts were estimated and … To obtain improved inverse solutions, dynamic LORETA exploits both spatial and temporal information, whereas LORETA uses only spatial information. A considerable improvement in performance compared to LORETA was found when dynamic LORETA was applied to simulated EEG data, and the new …

A battery's capacity is an important indicator of its state of health and determines the maximum cruising range of electric vehicles. It is also a crucial piece of information for helping improve state of charge (SOC) estimation, health prognosis, and other related tasks in the battery management system (BMS).

Recursive Least Squares with multiple forgetting factors accounts for different rates of change for different parameters and thus enables simultaneous estimation of the time-varying grade and the piece-wise constant mass. An ad-hoc modification of the update law for the gain in the RLS scheme is proposed and used in simulation and experiments. A recursive penalized least squares (RPLS) step forms the main element of our implementation.

RLS-RTMDNet: code and raw result files of our CVPR2020 oral paper "Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking" by Jin Gao, Weiming Hu and Yan Lu (NLPR, Institute of Automation, CAS; University of Chinese Academy of Sciences; Microsoft Research), created by Jin Gao. Online learning is crucial to robust visual object … RLS-RTMDNet is dedicated to improving the online tracking part of RT-MDNet (project page and paper) based on our proposed recursive least-squares estimator-aided online learning method. If you're using this code in a publication, please cite our paper.

C-squares (acronym for the concise spatial query and representation system) is a system of spatially unique, location-based identifiers for areas on the surface of the earth, represented as cells from a latitude-longitude based Discrete Global Grid at a hierarchical set of resolution steps.

- F. Ding, T. Chen, L. Qiu. Bias compensation based recursive least squares identification algorithm for MISO systems. IEEE Trans. Circ. Syst. – II: Express Briefs, 53 (5) (2006), pp. 349-353.
- Y. Zhang, G. Cui. Bias compensation methods for stochastic systems with colored noise. Appl. Math. Model., 35 (4) (2011), pp. 1709-1716.
- C.C. Took, D.P. Mandic. The widely linear quaternion recursive least squares filter. Proceedings of the Second International Workshop on Cognitive Information Processing (CIP) …
- Ali H. Sayed and Thomas Kailath. The Digital Signal Processing Handbook, pages 21-1, 1998.
- Lecture Series on Adaptive Signal Processing by Prof. M. Chakraborty, Department of E and ECE, IIT Kharagpur.
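As a minimal concrete illustration of the linear recursive estimator form x̃_k = x̃_{k−1} + K_k(y_k − H_k x̃_{k−1}) discussed above, the sketch below (not taken from any of the quoted sources) uses the simplest scalar case H_k = 1 with gain K_k = 1/k, which recursively computes the least squares estimate of a constant — the running mean:

```python
# Minimal sketch (not from any of the quoted sources): the recursive
# estimator x~_k = x~_{k-1} + K_k (y_k - H_k x~_{k-1}) in the simplest
# scalar case H_k = 1, where the least squares estimate of a constant
# is the running mean and the gain is K_k = 1/k.

def recursive_mean(measurements):
    """Recursively estimate a constant from noisy scalar measurements."""
    x_hat = 0.0
    for k, y in enumerate(measurements, start=1):
        gain = 1.0 / k                       # K_k for H_k = 1
        x_hat = x_hat + gain * (y - x_hat)   # gain times the correction term
    return x_hat

print(recursive_mean([1.0, 2.0, 3.0, 4.0]))  # close to the batch mean 2.5
```

Each new measurement only adjusts the previous estimate by a gain-weighted correction term, so nothing has to be recomputed from scratch.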
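The matrix-inversion-lemma update of P described above can be reduced to a few lines of code. The following is an illustrative sketch (function and variable names are my own, not taken from any of the quoted sources or libraries), using the standard RLS gain and covariance update:

```python
import numpy as np

# Illustrative sketch of one RLS update built on the matrix inversion lemma,
# with P playing the role of [X^T X]^{-1}. Names are assumptions for this
# example, not taken from any particular source or library.

def rls_step(theta, P, x, y, lam=1.0):
    """One recursive least squares update for the model y ≈ x · theta.

    theta : current parameter estimate, shape (n,)
    P     : current covariance-like matrix, shape (n, n)
    x     : new regressor vector, shape (n,)
    y     : new scalar measurement
    lam   : forgetting factor (1.0 gives ordinary RLS)
    """
    Px = P @ x
    gain = Px / (lam + x @ Px)               # K(t+1)
    theta = theta + gain * (y - x @ theta)   # correction-term update
    # Lemma-based update of P (outer(gain, Px) equals K x^T P since
    # P stays symmetric under this recursion).
    P = (P - np.outer(gain, Px)) / lam
    return theta, P

# Identify theta_true = [2, -1] from noiseless data.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 1e6     # large P(0) = weak prior
for _ in range(50):
    x = rng.standard_normal(2)
    theta, P = rls_step(theta, P, x, x @ theta_true)
print(theta)  # approaches [2, -1]
```

As claimed above, no matrix inversion appears anywhere in the update, and P keeps its n×n size throughout.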
https://codereview.stackexchange.com/questions/223296/simple-spatial-grid-for-particle-system
# Simple spatial grid for particle system I am going to simulate a particle system, the particles are infinitesimal points which apply forces on the neighbors and I need a fast way of making proximity checks, they are going to have a maximum check radius but not a minimum and will be almost evenly spaced. The simulation will be very similar to this: https://youtu.be/SFf3pcE08NM I thought a spatial grid would be the easiest approach for it and I need it to be as cost efficient as possible to matter how ugly it gets. Is there any optimization I can make on this code? Is there a way to compute the optimal cell size from the average distance between particles? _empty_set = set() class SpatialHash: def __init__(self, cell_size=0.1): self.cells = {} self.bucket_map = {} self.cell_size = cell_size def key(self, co): return int(co[0] / self.cell_size), int(co[1] / self.cell_size), int(co[2] / self.cell_size) co = item.co k = int(co[0] / self.cell_size), int(co[1] / self.cell_size), int(co[2] / self.cell_size) if k in self.cells: c = self.cells[k] else: c = set() self.cell_size[k] = c self.bucket_map[item] = c def remove_item(self, item): self.bucket_map[item].remove(item) def update_item(self, item): self.bucket_map[item].remove(item) for x in range(int((co[0] - radius) / self.cell_size), int((co[0] + radius) / self.cell_size) + 1): for y in range(int((co[1] - radius) / self.cell_size), int((co[1] + radius) / self.cell_size) + 1): for z in range(int((co[2] - radius) / self.cell_size), int((co[2] + radius) / self.cell_size) + 1): for item in self.cells.get((x, y, z), _empty_set): if item not in exclude and (item.co - co).length_squared <= r_sqr: yield item • Did you consider using an octree for this? If so, perhaps explaining why you rejected that may help inform the reviews. 
HoboProber's answer is good with regards to clean code, but more than likely slower than what you already have. At the moment we basically have the following variants of the key function:

```python
from math import floor

def key_op(particle, cell_size):
    return int(particle[0] / cell_size), int(particle[1] / cell_size), int(particle[2] / cell_size)

def key_hobo(particle, cell_size):
    return tuple(coordinate // cell_size for coordinate in particle)

def key_op_edit(particle, cell_size):
    """This was taken from your edit that was rolled back due to answer invalidation"""
    return floor(particle[0] / cell_size), floor(particle[1] / cell_size), floor(particle[2] / cell_size)
```

HoboProber already sneakily introduced floor division (`//`) to you, so the explicit version of this is also up for discussion:

```python
def key_floor_div(particle, cell_size):
    return particle[0] // cell_size, particle[1] // cell_size, particle[2] // cell_size
```

The timings for them are as follows:

    key_op        1.28 µs ± 16.2 ns (mean ± std. dev. of 7 runs, 1000000 loops each)
    key_op_edit   1.62 µs ± 13.8 ns (mean ± std. dev. of 7 runs, 1000000 loops each)
    key_hobo      2.09 µs ± 34.3 ns (mean ± std. dev. of 7 runs, 100000 loops each)
    key_floor_div 1.11 µs ± 14.3 ns (mean ± std. dev. of 7 runs, 1000000 loops each)

The timing was done in an IPython environment running `%timeit func(example, cell_size)` with `example = (23849.234, 1399283.8923, 2137842.24357)` and `cell_size = 10`. Based on these results there seems to be a narrow win for the floor-division version over the original implementation.

But can we do better? Enter numba, a just-in-time compiler for Python code. At this early testing stage basically all you have to do is `from numba import jit` and then

```python
@jit(nopython=True)
def key_op_numba(particle, cell_size):
    return int(particle[0] / cell_size), int(particle[1] / cell_size), int(particle[2] / cell_size)
```

Act accordingly for the other versions. Now let's look at the timings:

    key_op        623 ns ± 8.42 ns (mean ± std. dev. of 7 runs, 1000000 loops each)
    key_op_edit   618 ns ± 10.5 ns (mean ± std. dev. of 7 runs, 1000000 loops each)
    key_hobo*     N/A
    key_floor_div 628 ns ± 7.06 ns (mean ± std. dev. of 7 runs, 1000000 loops each)

As you can see, numba can easily double the performance of these functions, and that is with the default settings. You might be able to squeeze out a little bit more if you are really going for it.

Bug: There is likely a bug in this piece of code:

```python
if k in self.cells:
    c = self.cells[k]
else:
    c = set()
    self.cell_size[k] = c
    # ^-- should likely be self.cells here
```

Note: a `defaultdict` can help to simplify this part of the code, though I cannot really say how its performance compares to that of a normal dict with your member-check/if-construct.

*numba does not seem to like to instantiate a tuple from a generator expression; transforming it into a list comprehension yields 1.25 µs ± 25.1 ns. You then have to convert the result to a tuple to make it hashable, e.g. `tuple(key_hobo_numba(example, cell_size))`, which leaves us with a final timing of 1.45 µs ± 14.8 ns.

You could rewrite your key() method and use it to much greater effect, reducing repeated code:

```python
def key(self, co, radius=0):
    return tuple((c + radius) // self.cell_size for c in co)
```

You can then call key() in both add_item() and check_sphere(). In add_item() it will replace k, while in check_sphere() you can use it to define your ranges. After that I would look for a way to flatten those nested for loops, which will likely be an algorithm change. Hopefully someone else has some ideas there.
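Tying those last two suggestions together (the `defaultdict` note and flattening the nested loops with `itertools.product`), a sketch could look like the following. The class and method names are assumed for illustration, and `check_sphere()` performs the same coarse cell-range query as the original, not an exact sphere-distance test:

```python
from collections import defaultdict
from itertools import product

class SpatialHash:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(set)  # missing keys auto-create an empty set

    def key(self, co, radius=0):
        # one shared key function for insertion and range queries
        return tuple(int((c + radius) // self.cell_size) for c in co)

    def add_item(self, item, co):
        self.cells[self.key(co)].add(item)

    def check_sphere(self, co, radius):
        lo = self.key(co, -radius)
        hi = self.key(co, radius)
        found = set()
        # product() flattens the three nested x/y/z loops into one loop
        for k in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
            if k in self.cells:  # guard: don't let defaultdict create empty cells
                found |= self.cells[k]
        return found
```

The `if k in self.cells` guard matters: indexing a `defaultdict` creates the missing key, so a bare lookup inside `check_sphere()` would slowly fill the dict with empty cells.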
https://www.electro-tech-online.com/threads/deception-in-soldering-iron-ratings.143071/page-4
# Deception In Soldering Iron Ratings

#### JimB
##### Super Moderator, Most Helpful Member

> Test table may be flawed with average power vs peak power in PWM-controlled heaters

What! PWM does not come into the measurements at all. None of these soldering irons has ANY kind of electronic control. Both the Oryx and the Weller use good old-fashioned on/off control with a period of several seconds.

JimB

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

I did not see a model # for the Weller, but we know it was a major source of EMI on a roughly 10-second cycle, so new ones use a soft PWM switch. Still, I'll accept the report for what it is, considering no source-voltage tolerance tests were performed. In my first R&D days, the shop supervisor taught NASA soldering methods to staff from a 6" thick Bible. The techs met all technical requirements with a 30 W fixed-power iron and a dimmer. Thanks to Weller! I cut my teeth on random glitch immunity.

Last edited:

#### MrAl
##### Well-Known Member, Most Helpful Member

Hi Tony and JimB,

Nicely put, Jim. It is not real PWM, it's just on/off multiple-cycle control where the 'on' time can be 15 seconds or more as it is heating up for the first time from a cold start. This gives plenty of time to measure the input current and power, both to the input of the station itself and also to the iron element alone. The 'on' duration is controlled with a triac because it is AC powered, not DC, and it stays 'on' for many cycles spanning several seconds. I suppose there could be some DC real-PWM units out there, but this isn't one of them, and another unit by another company turned out to be the same. Also, the elements themselves are all similar except for the power rating; some of them even have the power rating stamped right on the side, and that is for the element itself, yet the measured current and voltage do not produce the stamped power.
For example, one was stamped with "50 watts" yet a DC current and voltage test showed only 36 watts, and there's no inductive component that can cause it to go higher than 36 watts if driven by AC instead of DC. BTW, both stations I tested were sold as having "60 watt" soldering irons. Some units out there have true 50 watt irons (the other model iron), but the ones sold with 36 watters can't drive the true 50 watters, so we can't get new irons to get more power either. Too bad. The drive circuits on these true higher-power units have different types of sensors and probably heavier-duty transformers for the higher current. I have to add that the irons do in fact work to solder many things, but they are just not as good as we would like to see, since they are advertised as higher. I'd say they are best for PC boards with regular-sized components, maybe up to the standard 2 watt resistors depending on the pad size.

Last edited:

#### Nigel Goodwin
##### Super Moderator, Most Helpful Member

> It is not real PWM, it's just on/off multiple cycle control where the 'on' time can be 15 seconds or more as it is heating up for the first time from a cold start. This gives plenty of time to measure the input current and power, both to the input of the station itself and also to the iron element alone. The 'on' duration is controlled with a triac because it is AC powered not DC, which stays 'on' for many cycles spanning several seconds.

I think you're missing the point, it's not 'real' PWM as there's no need (or point) in doing that, it's 'burst fire control' - a far more sensible system for controlling heating (including a soldering iron). I would imagine all electronically controlled soldering irons use this method of control, as it does everything that's required.
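The practical consequence of burst-fire control for the measurements in this thread can be sketched in a few lines. The element values are hypothetical; the point is that whole mains cycles are either fully on or fully off, so the average power is simply the full-on V²/R scaled by the duty fraction, with none of the RMS subtleties a true PWM waveform would introduce:

```python
def average_power(v_rms, r_element, cycles_on, cycles_total):
    """Burst-fire (whole-cycle on/off) control: average power is just
    the full-on power times the fraction of mains cycles passed."""
    p_full_on = v_rms ** 2 / r_element
    return p_full_on * cycles_on / cycles_total

# hypothetical 24 V RMS element supply into a 16 ohm element
p_heating = average_power(24.0, 16.0, 60, 60)  # long 'on' burst: the full 36 W
p_holding = average_power(24.0, 16.0, 15, 60)  # 25% duty once at temperature: 9 W
```

This is also why measuring during the long initial warm-up burst (100% duty) reads the true element power directly.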
#### MrAl
##### Well-Known Member, Most Helpful Member

> I think you're missing the point, it's not 'real' PWM as there's no need (or point) in doing that, it's 'burst fire control' - a far more sensible system for controlling heating (including a soldering iron). I would imagine all electronically controlled soldering irons use this method of control, as it does everything that's required.

Hi,

Well, actually that was what I was stating: that it is not true PWM. I am not arguing that we 'need' true PWM, just that on/off control does not interfere with the power measurement as it might in a true PWM system, where RMS values would differ from values extrapolated from averages. So we have plenty of time to measure the true power. BTW, we can measure the element itself for a more direct reading.

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

Keep in mind line tolerances vary with location and time of day and affect the results of those that use poor specmanship, e.g. a best-case voltage rating with worst-case unspecified test reports. So criteria may be mismatched in all honesty, but point taken. But there are irons for many purposes: RF types for fastest desoldering (I got 22 SMD memory chips extracted in 30 sec in a Comdex test), stained-glass types with large mass (150 W, for removing RF ground shields from ground planes), and vacuum de-soldering for general-purpose repair work. But again, you can do NASA-quality soldering with skill and a cheap iron. Don't forget the 1M to ground. Save money on an ESD station.

Last edited:

#### JimB
##### Super Moderator, Most Helpful Member

Tony Stewart wrote:

> but I'll accept the report for what it is, considering no source voltage tolerance tests were performed.

Am I being particularly thick here or what? What have "source voltage tolerance tests" got to do with the price of fish?
In post #2 of this thread, I clearly stated:

> I measured the resistance of the heating element, hot and cold, and calculated the power at both maximum and minimum rated supply voltages. The results are presented in the table below.

Which in general terms means that I measured the heating element resistance when the iron was cold, and again when hot. I then CALCULATED the wattage based on the manufacturer's rated minimum and maximum supply voltages. I then presented those results in a table so that the reader may compare my calculated values of heating element power with those stamped on the soldering iron itself. What is so difficult about that?

Tony Stewart also wrote:

> the shop supervisor taught NASA soldering methods to staff from a 6" thick Bible

Therein lies the problem perhaps: the shop supervisor should have used a 6" thick book on soldering instead of a 6" thick religious text.

JimB

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

To clear the smoke, some ODMs may use nominal Vac, while others appear to have used max Vac for AC-switched iron ratings. Vac has a 20% tolerance, except in places like India. Believe what you like, be it any religion, but NASA soldering is based on physics and pre-empted industry standards from IPC, with more detail. Like how to desolder the centre wire and rewire pins with suitable heatshrink for a Mil-Std high-density circular connector. Not easy surrounded by natives circling the wagon.

#### MrAl
##### Well-Known Member, Most Helpful Member

> To clear the smoke, some ODMs may use nominal Vac, while others appear to have used max Vac for AC-switched iron ratings. Vac has a 20% tolerance, except in places like India. Believe what you like, be it any religion, but NASA soldering is based on physics and pre-empted industry standards from IPC, with more detail. Like how to desolder the centre wire and rewire pins with suitable heatshrink for a Mil-Std high-density circular connector. Not easy surrounded by natives circling the wagon.
Hi,

Whenever something like this comes up, we hear all kinds of excuses come out of the woodwork. Some will try to claim that the wattage rating is based on input power to the entire station rather than just the iron, but that does not fly with me either. Also, to think that they would base their soldering iron power rating on the max line voltage is just downright crazy. That would in fact bring the rating up near 50 watts, but then for low line it would be as low as 23 watts, which is less than half the rating.

When I worked in the industry we were always well aware of the effect of line voltage tolerances, and there were test procedures that included at least three tests: one for low line, one for nominal, and one for high line. There was never a question about how something would work because it was always tested, and that was because we were always aware of the effects of different line conditions. That's how design work goes for things that run off of the line. We know the tolerance, and we design with that in mind. We don't assume that the line is always at one particular level. If we did that, some things would not work right at all, and other things would even blow up.

Also, everything else is rated based on the nominal line voltage. Other irons will measure much closer to the rating at the nominal voltage, so I can't see any good reason why these particular irons cannot measure close too. As I was saying in another post, one element had "50 watts" stamped right on it, and at 24 VDC it tested at only 36 watts. Now maybe they meant at a higher voltage? If so, then it should not have been included as part of the package that comes with the soldering stations, especially when they state the operating voltage is 120 VAC 60 Hz. Your point is interesting though, in that maybe if we applied another 20 percent voltage (somewhat over 28 volts) we'd get around 50 watts. I wonder if the element could take it without burning up.
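The arithmetic behind those numbers is just Ohm's law; a quick check using the figures quoted in this thread (36 W measured at 24 V DC):

```python
# element resistance implied by 36 W measured at 24 V DC
r_element = 24**2 / 36               # 16 ohm

# voltage actually needed to reach the stamped 50 W in that element
v_for_50w = (50 * r_element) ** 0.5  # about 28.3 V, i.e. "somewhat over 28 volts"

# power scales with the square of the applied voltage, so a +/-20% swing gives:
p_high_line = 36 * 1.20**2           # about 51.8 W
p_low_line = 36 * 0.80**2            # about 23.0 W
```

which matches the roughly 50 W high-line and 23 W low-line spread discussed above.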
I actually now have the test equipment to test this either AC or DC, but I fear that it could burn out the station or the iron element, or just the iron element if I test it with a DC power supply. I guess the triac should be able to take it, but I'd have to risk losing an element just for the test. There is also the chance that better-quality (more expensive) stations put out a higher voltage and thus attain a higher power level. One of the two stations tested cost about $80 USD and the other about $150 USD.

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

Hey Mr Al, I was actually making fun of marketing guys who translate good engineering specs and cherry-pick the ones that sell more, like GiB vs GB of binary vs decimal memory on a HDD, and then neglect the parts used by OS backup etc. in laptops. As a former Test Engineering Mgr, I understand any valid test must indicate operating conditions. On the burn-up part: 20% over is no immediate overtemp problem, since it has a thermal cutout to regulate fixed temp (or variable temp with a thermistor), except exterior oxidizing and coil aging would be degraded by 2x per 10 degrees C over nominal * d.f.

#### MrAl
##### Well-Known Member, Most Helpful Member

> Hey Mr Al, I was actually making fun of marketing guys who translate good engineering specs and cherry-pick the ones that sell more, like GiB vs GB of binary vs decimal memory on a HDD, and then neglect the parts used by OS backup etc. in laptops. As a former Test Engineering Mgr, I understand any valid test must indicate operating conditions. On the burn-up part: 20% over is no immediate overtemp problem, since it has a thermal cutout to regulate fixed temp (or variable temp with a thermistor), except exterior oxidizing and coil aging would be degraded by 2x per 10 degrees C over nominal * d.f.

Hi there Tony,

Oh OK, ha ha, I see where you were coming from now.
I agree fully. I appreciate your informative input here too, as your past experience sheds light on many topics of interest to many people. I was going to build my own station originally, then decided to buy one because I found one at a good price, around $80 USD, and that included a hot-air rework tool as well, with several tips. The soldering iron circuit looked interesting though, so I thought that one day I might build a higher-powered circuit that could handle the higher-rated irons. I found some that were really around 50 watts (I think) but they are advertised as 70 watts (ha ha). 50 watts would probably be very good for a lot of stuff I do. I get by with 36 watts for now, and have other irons for higher-powered stuff.

Another interesting problem that came up was with the soldering 'gun' from Harbor Freight. I may have mentioned this already, but the tip melted in half after only a few uses, and this problem was verified because a friend bought the same gun and that one melted too. I now use a heavy-duty copper tip, which draws too much power from the internal transformer, so it overheats the transformer if used for too long, unless I use a variac to lower the power manually. Strange, but at least I can use it again now. Before the transformer overheats, that darn thing probably works at 200 watts or so, for maybe five minutes' use only.

And again, thanks for your input here, as it is always interesting to hear what you have to say.

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

> Another interesting problem that came up was with the soldering 'gun' from Harbor Freight. I may have mentioned this already, but the tip melted in half after only a few uses, and this problem was verified because a friend bought the same gun and that one melted too.

This is a design flaw in the process. Copper is very brittle and can easily break after 2 bends. When a sharp bend is made, the incremental ESR can easily increase at the bend due to crystalline weakening.
A special process like Weller used is needed to reduce the crystalline stress of the copper bending process, yet distribute the increased resistance over a greater length, like 10 mm centred around the tip. If it were tinned with solder, that would help. Wasn't that in the instructions? Get a Weller tip.

#### MrAl
##### Well-Known Member, Most Helpful Member

Hello Tony,

The tip that came with it was made of some other metal, rather than copper. It had a higher resistance than copper too. I don't know what metal it was, but with a pure copper tip the iron draws more current from the line and thus more power. I thought about increasing the resistance by cutting small grooves into the tip along its length, but didn't get around to that, and then I got the variac, which seems to solve the problem. Any idea what kind of other metal that was? Fairly light colored, almost like 'white' metal.

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

What happens if you twist the tip a few turns between two pliers? If the resistance is high, like some aluminum or nichrome-mix alloy, that would explain it. I think copper guns are around 100 turns primary, so 100 A secondary. This one, being higher resistance, could be only 75 turns primary, the impedance being root(N) of the secondary. Copper would be too low on primary impedance and the power would double. The primary needs 25 more turns for a copper tip on a 1-turn secondary.

Last edited:

#### MrAl
##### Well-Known Member, Most Helpful Member

Hi,

Well, the tip can't be aluminum because it tins too easily. It would have to be something that accepts solder readily, but not copper. Unfortunately I don't think the transformer can be modified too easily, although I could check if there is room for more turns. Now that it is like that, however, I kinda like the idea of having higher power even for a short time, but yeah, I guess that option should be switchable.

#### Rich D.
##### Active Member

In my cynical opinion you can't trust anybody when there is money involved, including me. These days there is so much anonymity between the retailers and distributors and the manufacturing (often overseas) and the owners of those companies that those execs can pretty much get away with whatever lies they want to, and few people actually call their 'bluff'. And with technical claims, there is a lot of number-fudging that most of us (in the general public) don't really understand completely. Maybe 0.1% of the people find out and get a simple refund, but they still make money from the other 99.9% and can afford very good lawyers. They know that virtually all of us don't have the time or energy to pursue these rip-offs. Maybe if a few dozen people or more were killed they might be held accountable. We regular folks are kept too busy with our full-time jobs, family, etc...

Anybody remember the big conspiracy of almost every lawn mower manufacturer that got together to collectively raise their horsepower ratings? At the time (late 90's) lawnmowers had huge stickers on the side touting their horsepower ratings. So many people bought new mowers based on the belief that they were getting better performance - including me. They lost a class-action suit, but there were so many victims of the rip-off that I never saw a dime. Meanwhile at the time they were lobbying the US government to re-define the standard horsepower, but I suppose they were a little short of bribe money. The only way that they were found out was that somebody happened to send the true power ratings of the various models to the EPA, which was enforcing power vs. emission limits. Some observant fellow noticed those power ratings did not match the marketing claims, so they had documented proof of their lies. (At least that's how I remember it.) I mean, really... who here has the ability to measure lawn mower horsepower?
Sure, some guys may have access to a dynamometer, but how many of them can be hooked up to a lawnmower? I noticed the following spring, when all the new models came out, suddenly NO lawnmower had horsepower ratings. Instead they all had torque ratings all over the mowers! You can be pretty sure those numbers are all made up. Maybe some day there will be a law that requires stamping the parent company of any manufactured product on the product, combined with a law holding the executives of those companies personally liable for fraud. And maybe someday there will be no more hunger, disease, war, poverty, it will always be sunny, and unicorns will roam the earth...

#### Tony Stewart
##### Well-Known Member, Most Helpful Member

Torque is more important anyway when it comes to load, rather than load × RPM. I can imagine that with RPM adjustments that may increase HP, you can cheat, but at the expense of torque and engine life. So torque is more important for maintaining constant speed when the RPMs may be different in each case... unless you start comparing 16" vs 23".

Companies with the most to lose by brand recognition and public fraud will attempt to be more honest.

Getting back to iron specs: I believe it may be vague to some who decide to give the breaker rating for worst-case line voltage, like +10% Vnom, then choose that for the actual power level, which might explain +20% in power rating relative to nominal. Whereas the breaker rating and nominal power rating should be worst case and nominal respectively, rather than worst case and best case.

Last edited:

#### MrAl
##### Well-Known Member, Most Helpful Member

Hi,

Nice writeup, Rich. That is a good summary of the state of manufacturing vs sales in the USA today. The government seems to ignore anything that pertains to the consumer and allows companies to get away with way too much. The last unicorn was probably shot and sold as regular cow meat (steak) <chuckle>.
Tony: I should mention that I have soldered a lot of stuff now with one of the units I tested, but the heavier-gauge wire takes longer, as you have to keep the iron on the joint longer. I tried soldering some heavier wires together (stranded) and it took longer than usual. It would have been much faster with a true 60 watt iron. It did solder, however, given a little more time, so it's not like the units are useless. They still work for most things, but don't plan on doing it in a hurry if you are dealing with maybe #14 AWG wires or heavier. I guess the key is to have a heavy-duty soldering gun on hand as well. I have that Harbor Freight gun, but I can only use it for a few minutes at a time or it will overheat and start to smell bad. Alternatively, I have to turn down the voltage to it and live with a lower power rating. That happened because the original tip melted after a couple of uses, so I had to use a different replacement tip, which has lower resistance. Pretty nutty, but it has a lot of power now, at least for a few minutes.

#### Rich D.
So I chose a Toro, which was 6.5 hp / 22", and it was also the lightest except for Lawn-Boy's 2-stroke (which I had a lot of experience with, and don't like mixing oil that much). I can say for sure that Toro was part of the Class action, but I do remember MANY other big-name brands, probably all of them, at least those that used Briggs & Stratton and Honda motors. (I'll refrain from naming them, not absolutely sure now.) Point is you can't trust brand names anymore. More often than not, they are commodities that are sold and traded just like any other commodity. Like gas stations... it is irrelevant what sign is on the lot, (Exxon, Sunoco, Hess, Mobil -er-Lukoil...) they are all free to buy and sell whatever gas they can get delivered cheapest. Heck, there is even a growing counterfeit industry for electronic components! Is that Panasonic capacitor really a Panasonic or some knock-off brand manufactured in China? Loading
https://wilkelab.org/ggridges/reference/stat_density_ridges.html
This stat is the default stat used by geom_density_ridges. It is very similar to stat_density; however, there are a few differences. Most importantly, the density bandwidth is chosen across the entire dataset.

```r
stat_density_ridges(
  mapping = NULL,
  data = NULL,
  geom = "density_ridges",
  position = "identity",
  na.rm = FALSE,
  show.legend = NA,
  inherit.aes = TRUE,
  bandwidth = NULL,
  from = NULL,
  to = NULL,
  jittered_points = FALSE,
  quantile_lines = FALSE,
  calc_ecdf = FALSE,
  quantiles = 4,
  quantile_fun = quantile,
  n = 512,
  ...
)
```

Arguments

- `mapping`: Set of aesthetic mappings created by aes() or aes_(). If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping.
- `data`: The data to be displayed in this layer. There are three options: if NULL, the default, the data is inherited from the plot data as specified in the call to ggplot(); a data.frame, or other object, will override the plot data; a function will be called with a single argument, the plot data, and its return value, which must be a data.frame, will be used as the layer data.
- `geom`: The geometric object to use to display the data.
- `position`: Position adjustment, either as a string, or the result of a call to a position adjustment function.
- `na.rm`: If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed.
- `show.legend`: logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes.
- `inherit.aes`: If FALSE, overrides the default aesthetics, rather than combining with them.
- `bandwidth`: Bandwidth used for density calculation. If not provided, is estimated from the data.
- `from`, `to`: The left and right-most points of the grid at which the density is to be estimated, as in density(). If not provided, these are estimated from the data range and the bandwidth.
- `jittered_points`: If TRUE, carries the original point data over to the processed data frame, so that individual points can be drawn by the various ridgeline geoms. The specific position of these points is controlled by various position objects, e.g. position_points_sina() or position_raincloud().
- `quantile_lines`: If TRUE, enables the drawing of quantile lines. Overrides the calc_ecdf setting and sets it to TRUE.
- `calc_ecdf`: If TRUE, stat_density_ridges calculates an empirical cumulative distribution function (ecdf) and returns a variable ecdf and a variable quantile. Both can be mapped onto aesthetics via stat(ecdf) and stat(quantile), respectively.
- `quantiles`: Sets the number of quantiles the data should be broken into. Used if either calc_ecdf = TRUE or quantile_lines = TRUE. If quantiles is an integer then the data will be cut into that many equal quantiles. If it is a vector of probabilities then the data will be cut by them.
- `quantile_fun`: Function that calculates quantiles. The function needs to accept two parameters, a vector x holding the raw data values and a vector probs providing the probabilities that define the quantiles. Default is quantile.
- `n`: The number of equally spaced points at which the density is to be estimated. Should be a power of 2. Default is 512.
- `...`: other arguments passed on to layer(). These are often aesthetics, used to set an aesthetic to a fixed value, like color = "red" or size = 3. They may also be parameters to the paired geom/stat.

Examples

```r
library(ggplot2)

# Examples of coloring by ecdf or quantiles
ggplot(iris, aes(x = Sepal.Length, y = Species, fill = factor(stat(quantile)))) +
  stat_density_ridges(calc_ecdf = TRUE, quantiles = 5) +
  scale_fill_viridis_d(name = "Quintiles") +
  theme_ridges()
#> Picking joint bandwidth of 0.181

ggplot(iris, aes(x = Sepal.Length, y = Species,
                 fill = 0.5 - abs(0.5 - stat(ecdf)))) +
  stat_density_ridges(geom = "density_ridges_gradient", calc_ecdf = TRUE) +
  scale_fill_viridis_c(name = "Tail probability", direction = -1) +
  theme_ridges()
#> Picking joint bandwidth of 0.181
```
https://toc.ui.ac.ir/article_7654.html
https://toc.ui.ac.ir/article_7654.html
Document Type: 75th Birthday of G. B. Khosrovshahi

Authors
1. Dept of Mathematics, National University of Singapore
2. Department of Mathematics, National University of Singapore

Abstract
A broadcast on a graph $G$ is a function $f : V(G) \rightarrow \{0, 1, \dots, \mathrm{diam}(G)\}$ such that for every vertex $v \in V(G)$, $f(v) \leq e(v)$, where $\mathrm{diam}(G)$ is the diameter of $G$, and $e(v)$ is the eccentricity of $v$. In addition, if every vertex hears the broadcast, then the broadcast is a dominating broadcast. The cost of a broadcast $f$ is the value $\sigma(f) = \sum_{v \in V(G)} f(v)$. In this paper we determine the minimum cost of a dominating broadcast (also known as the broadcast domination number) for a torus $C_{m} \,\Box\, C_{n}$.
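To illustrate the definitions, the broadcast domination number of a tiny torus can be computed by exhaustive search. This brute force is not the technique of the paper, and is feasible only for very small $m$ and $n$:

```python
from itertools import product

def torus_dist(u, v, m, n):
    """Graph distance on the torus C_m [box] C_n: sum of the two cyclic distances."""
    dx = abs(u[0] - v[0])
    dy = abs(u[1] - v[1])
    return min(dx, m - dx) + min(dy, n - dy)

def broadcast_domination_number(m, n):
    verts = [(i, j) for i in range(m) for j in range(n)]
    diam = m // 2 + n // 2  # diameter of C_m [box] C_n; e(v) = diam by vertex-transitivity
    best = None
    # try every broadcast f : V -> {0, ..., diam} (exponential: tiny m, n only!)
    for f in product(range(diam + 1), repeat=len(verts)):
        cost = sum(f)
        if best is not None and cost >= best:
            continue
        # every vertex u must "hear" some v with f(v) >= 1 and dist(u, v) <= f(v)
        if all(any(fv and torus_dist(u, v, m, n) <= fv
                   for v, fv in zip(verts, f)) for u in verts):
            best = cost
    return best
```

For $C_3 \,\Box\, C_3$ the answer is 2: a single vertex broadcasting at strength equal to the diameter already reaches every vertex, while no broadcast of cost 1 covers all nine vertices.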
https://labs.tib.eu/arxiv/?author=D.I.%20Sober
• ### Semi-Inclusive $\pi_0$ target and beam-target asymmetries from 6 GeV electron scattering with CLAS(1709.10054) April 24, 2018 hep-ph, nucl-ex We present precision measurements of the target and beam-target spin asymmetries from neutral pion electroproduction in deep-inelastic scattering (DIS) using the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Lab. We scattered 6-GeV, longitudinally polarized electrons off longitudinally polarized protons in a cryogenic $^{14}$NH$_3$ target, and extracted double and single target spin asymmetries for $ep\rightarrow e^\prime\pi^0X$ in multidimensional bins in four-momentum transfer ($1.0<Q^2<3.2$ GeV$^2$), Bjorken-$x$ ($0.12<x<0.48$), hadron energy fraction ($0.4<z<0.7$), transverse pion momentum ($0<P_T<1.0$ GeV), and azimuthal angle $\phi_h$ between the lepton scattering and hadron production planes. We extracted asymmetries as a function of both $x$ and $P_T$, which provide access to transverse-momentum distributions of longitudinally polarized quarks. The double spin asymmetries depend weakly on $P_T$. The $\sin 2\phi_h$ moments are zero within uncertainties, which is consistent with the expected suppression of the Collins fragmentation function. The observed $\sin\phi_h$ moments suggest that quark gluon correlations are significant at large $x$. • ### Photon beam asymmetry $\Sigma$ in the reaction $\vec{\gamma} p \to p \omega$ for $E_\gamma$ = 1.152 to 1.876 GeV(1706.04280) June 13, 2017 nucl-ex Photon beam asymmetry $\Sigma$ measurements for $\omega$ photoproduction in the reaction $\vec{\gamma} p \to \omega p$ are reported for photon energies from 1.152 to 1.876 GeV. Data were taken using a linearly-polarized tagged photon beam, a cryogenic hydrogen target, and the CLAS spectrometer in Hall B at Jefferson Lab. The measurements obtained markedly increase the size of the database for this observable, extend coverage to higher energies, and resolve discrepancies in previously published data. 
Comparisons of these new results with predictions from a chiral-quark-based model and from a dynamical coupled-channels model indicate the importance of interferences between $t$-channel meson exchange and $s$- and $u$-channel contributions, underscoring sensitivity to the nucleon resonances included in those descriptions. Comparisons with the Bonn-Gatchina partial-wave analysis indicate the $\Sigma$ data reported here help to fix the magnitudes of the interference terms between the leading amplitudes in that calculation (Pomeron exchange and the resonant portion of the $J^P=3/2^+$ partial wave), as well as the resonant portions of the smaller partial waves with $J^P = 1/2^-$, $3/2^-$, and $5/2^+$. • ### The Beam-Target Helicity Asymmetry for $\vec{\gamma} \vec{n} \rightarrow \pi^- p$ in the $N^*$ Resonance Region(1705.04713) May 12, 2017 nucl-ex We report the first beam-target double-polarization asymmetries in the $\gamma + n(p) \rightarrow \pi^- + p(p)$ reaction spanning the nucleon resonance region from invariant mass $W = 1500$ to $2300$ MeV. Circularly polarized photons and longitudinally polarized deuterons in $H\!D$ have been used with the CLAS detector at Jefferson Lab. The exclusive final state has been extracted using three very different analyses that show excellent agreement, and these have been used to deduce the $E$ polarization observable for an effective neutron target. These results have been incorporated into new partial wave analyses, and have led to significant revisions for several $\gamma nN^*$ resonance photo-couplings. • ### First measurement of the helicity asymmetry $E$ in $\eta$ photoproduction on the proton(1507.00325) Jan. 21, 2016 nucl-ex Results are presented for the first measurement of the double-polarization helicity asymmetry E for the $\eta$ photoproduction reaction $\gamma p \rightarrow \eta p$.
Data were obtained using the FROzen Spin Target (FROST) with the CLAS spectrometer in Hall B at Jefferson Lab, covering a range of center-of-mass energy W from threshold to 2.15 GeV and a large range in center-of-mass polar angle. As an initial application of these data, the results have been incorporated into the J\"ulich model to examine the case for the existence of a narrow $N^*$ resonance between 1.66 and 1.70 GeV. The addition of these data to the world database results in marked changes in the predictions for the E observable using that model. Further comparison with several theoretical approaches indicates these data will significantly enhance our understanding of nucleon resonances. • ### First Results from The GlueX Experiment(1512.03699) Jan. 14, 2016 hep-ex, nucl-ex, physics.ins-det The GlueX experiment at Jefferson Lab ran with its first commissioning beam in late 2014 and the spring of 2015. Data were collected on both plastic and liquid hydrogen targets, and much of the detector has been commissioned. All of the detector systems are now performing at or near design specifications and events are being fully reconstructed, including exclusive production of $\pi^{0}$, $\eta$ and $\omega$ mesons. Linearly-polarized photons were successfully produced through coherent bremsstrahlung and polarization transfer to the $\rho$ has been observed. • ### Photoproduction of $\pi^0$-pairs off protons and off neutrons(1510.09167) Oct. 30, 2015 nucl-ex Total cross sections, angular distributions, and invariant-mass distributions have been measured for the photoproduction of $\pi^0\pi^0$ pairs off free protons and off nucleons bound in the deuteron. The experiments were performed at the MAMI accelerator facility in Mainz using the Glasgow photon tagging spectrometer and the Crystal Ball/TAPS detector. The accelerator delivered electron beams of 1508 and 1557~MeV, which produced bremsstrahlung in thin radiator foils. 
The tagged photon beam covered energies up to 1400~MeV. The data from the free proton target are in good agreement with previous measurements and were only used to test the analysis procedures. The results for differential cross sections (angular distributions and invariant-mass distributions) for free and quasi-free protons are almost identical in shape, but differ in absolute magnitude up to 15\%. Thus, moderate final-state interaction effects are present. The data for quasi-free neutrons are similar to the proton data in the second resonance region (final state invariant masses up to $\approx$1550~MeV), where both reactions are dominated by the $N(1520)3/2^-\rightarrow \Delta(1232)3/2^+\pi$ decay. At higher energies, angular and invariant-mass distributions are different. A simple analysis of the shapes of the invariant-mass distributions in the third resonance region is consistent with strong contributions of an $N^{\star}\rightarrow N\sigma$ decay for the proton, while the reaction is dominated by a sequential decay via a $\Delta\pi$ intermediate state for the neutron. The data are compared to predictions from the Two-Pion-MAID model and the Bonn-Gatchina coupled channel analysis. • ### The isospin structure of photoproduction of pi-eta pairs from the nucleon in the threshold region(1507.01833) July 7, 2015 nucl-ex Photoproduction of $\pi\eta$-pairs from nucleons has been investigated from threshold up to incident photon energies of $\approx$~1.4~GeV. The quasi-free reactions $\gamma p\rightarrow p\pi^0\eta$, $\gamma n\rightarrow n\pi^0\eta$, $\gamma p\rightarrow n\pi^+\eta$, and $\gamma n\rightarrow p\pi^-\eta$ were for the first time measured from nucleons bound in the deuteron. 
The corresponding reactions from a free-proton target were also studied to investigate final-state interaction effects (for neutral pions the free-proton results could be compared to previous measurements; the $\gamma p\rightarrow n\pi^+\eta$ reaction was measured for the first time). For the $\pi^0\eta$ final state coherent production via the $\gamma d\rightarrow d\pi^0\eta$ reaction was also investigated. The experiments were performed at the tagged photon beam of the Mainz MAMI accelerator using an almost $4\pi$ coverage electromagnetic calorimeter composed of the Crystal Ball and TAPS detectors. The total cross sections for the four different final states obey the relation $\sigma(p\pi^0\eta)$ $\approx$ $\sigma(n\pi^0\eta)$ $\approx$ $2\sigma(p\pi^-\eta)$ $\approx$ $2\sigma(n\pi^+\eta)$ as expected for a dominant contribution from a $\Delta^{\star}\rightarrow\eta\Delta(1232)\rightarrow\pi\eta N$ reaction chain, which is also supported by the shapes of the invariant-mass distributions of nucleon-meson and $\pi$-$\eta$ pairs. The experimental results are compared to the predictions from an isobar reaction model. • ### Determination of the Beam-Spin Asymmetry of Deuteron Photodisintegration in the Energy Region $E_\gamma=1.1-2.3$ GeV(1503.05435) March 18, 2015 nucl-ex The beam-spin asymmetry, $\Sigma$, for the reaction $\gamma d\rightarrow pn$ has been measured using the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility (JLab) for six photon-energy bins between 1.1 and 2.3 GeV, and proton angles in the center-of-mass frame, $\theta_{c.m.}$, between $25^\circ$ and $160^\circ$. These are the first measurements of beam-spin asymmetries at $\theta_{c.m.}=90^\circ$ for photon-beam energies above 1.6 GeV, and the first measurements for angles other than $\theta_{c.m.}=90^\circ$. 
The angular and energy dependence of $\Sigma$ is expected to aid in the development of QCD-based models to understand the mechanisms of deuteron photodisintegration in the transition region between hadronic and partonic degrees of freedom, where neither effective field theories nor perturbative QCD can make reliable predictions. • ### First Measurement of the Polarization Observable E in the $\vec p(\vec \gamma,\pi^+)n$ Reaction up to 2.25 GeV(1503.05163) March 17, 2015 hep-ex, nucl-ex First results from the longitudinally polarized frozen-spin target (FROST) program are reported. The double-polarization observable E, for the reaction $\vec \gamma \vec p \to \pi^+n$, has been measured using a circularly polarized tagged-photon beam, with energies from 0.35 to 2.37 GeV. The final-state pions were detected with the CEBAF Large Acceptance Spectrometer in Hall B at the Thomas Jefferson National Accelerator Facility. These polarization data agree fairly well with previous partial-wave analyses at low photon energies. Over much of the covered energy range, however, significant deviations are observed, particularly in the high-energy region where high-L multipoles contribute. The data have been included in new multipole analyses resulting in updated nucleon resonance parameters. We report updated fits from the Bonn-Gatchina, J\"ulich, and SAID groups. • ### Towards a resolution of the proton form factor problem: new electron and positron scattering data(1411.6908) Nov. 25, 2014 nucl-ex There is a significant discrepancy between the values of the proton electric form factor, $G_E^p$, extracted using unpolarized and polarized electron scattering. Calculations predict that small two-photon exchange (TPE) contributions can significantly affect the extraction of $G_E^p$ from the unpolarized electron-proton cross sections.
We determined the TPE contribution by measuring the ratio of positron-proton to electron-proton elastic scattering cross sections using a simultaneous, tertiary electron-positron beam incident on a liquid hydrogen target and detecting the scattered particles in the Jefferson Lab CLAS detector. This novel technique allowed us to cover a wide range in virtual photon polarization ($\varepsilon$) and momentum transfer ($Q^2$) simultaneously, as well as to cancel luminosity-related systematic errors. The cross section ratio increases with decreasing $\varepsilon$ at $Q^2 = 1.45 \text{ GeV}^2$. This measurement is consistent with the size of the form factor discrepancy at $Q^2\approx 1.75$ GeV$^2$ and with hadronic calculations including nucleon and $\Delta$ intermediate states, which have been shown to resolve the discrepancy up to $2-3$ GeV$^2$. • ### $K^+\Lambda$ and $K^+\Sigma^0$ photoproduction with fine center-of-mass energy resolution(1308.5659) July 3, 2014 nucl-ex Measurements of $\gamma p \rightarrow K^{+} \Lambda$ and $\gamma p \rightarrow K^{+} \Sigma^0$ cross-sections have been obtained with the photon tagging facility and the Crystal Ball calorimeter at MAMI-C. The measurement uses a novel $K^+$ meson identification technique in which the weak decay products are characterized using the energy and timing characteristics of the energy deposit in the calorimeter, a method that has the potential to be applied at many other facilities. The fine center-of-mass energy ($W$) resolution and statistical accuracy of the new data results in a significant impact on partial wave analyses aiming to better establish the excitation spectrum of the nucleon. The new analyses disfavor a strong role for quark-diquark dynamics in the nucleon. 
• ### Induced polarization of {\Lambda}(1116) in kaon electroproduction(1406.4046) June 16, 2014 nucl-ex We have measured the induced polarization of the ${\Lambda}(1116)$ in the reaction $ep\rightarrow e'K^+{\Lambda}$, detecting the scattered $e'$ and $K^+$ in the final state along with the proton from the decay $\Lambda\rightarrow p\pi^-$. The present study used the CEBAF Large Acceptance Spectrometer (CLAS), which allowed for a large kinematic acceptance in invariant energy $W$ ($1.6\leq W \leq 2.7$ GeV) and covered the full range of the kaon production angle at an average momentum transfer $Q^2=1.90$ GeV$^2$. In this experiment a 5.50 GeV electron beam was incident upon an unpolarized liquid-hydrogen target. We have mapped out the $W$ and kaon production angle dependencies of the induced polarization and found striking differences from photoproduction data over most of the kinematic range studied. However, we also found that the induced polarization is essentially $Q^2$ independent in our kinematic domain, suggesting that somewhere below the $Q^2$ covered here there must be a strong $Q^2$ dependence. Along with previously published photo- and electroproduction cross sections and polarization observables, these data are needed for the development of models, such as effective field theories, and as input to coupled-channel analyses that can provide evidence of previously unobserved $s$-channel resonances. • ### A new determination of the eta transition form factor in the Dalitz decay eta -> e^+ e^- gamma with the Crystal Ball/TAPS detectors at the Mainz Microtron(1309.5648) April 29, 2014 hep-ex, nucl-ex The Dalitz decay eta -> e^+ e^- gamma has been measured in the gamma p -> eta p reaction with the Crystal Ball and TAPS multiphoton spectrometers, together with the photon tagging facility at the Mainz Microtron MAMI. The experimental statistics used in this work are one order of magnitude greater than in any previous measurement of eta -> e^+ e^- gamma.
The value obtained for the slope parameter 1/Lambda^2 of the eta transition form factor, 1/Lambda^2 = (1.95 +/- 0.15_stat +/- 0.10_syst) [1/GeV^2], is in good agreement with recent measurements conducted in eta -> e^+ e^- gamma and eta -> mu^+ mu^- gamma decays, as well as with recent form-factor calculations. The uncertainty obtained in the value of 1/Lambda^2 is lower compared to results from previous measurements of the eta -> e^+ e^- gamma decay. • ### Precision measurements of $g_1$ of the proton and the deuteron with 6 GeV electrons(1404.6231) April 24, 2014 nucl-ex The inclusive polarized structure functions of the proton and deuteron, g1p and g1d, were measured with high statistical precision using polarized 6 GeV electrons incident on a polarized ammonia target in Hall B at Jefferson Laboratory. Electrons scattered at lab angles between 18 and 45 degrees were detected using the CEBAF Large Acceptance Spectrometer (CLAS). For the usual DIS kinematics, Q^2>1 GeV^2 and the final-state invariant mass W>2 GeV, the ratio of polarized to unpolarized structure functions g1/F1 is found to be nearly independent of Q^2 at fixed x. Significant resonant structure is apparent at values of W up to 2.3 GeV. In the framework of perturbative QCD, the high-W results can be used to better constrain the polarization of quarks and gluons in the nucleon, as well as high-twist contributions. • ### Measurement of the beam-helicity asymmetry $I^{\odot}$ in the photoproduction of $\pi^0\pi^{\pm}$-pairs off protons and off neutrons(1403.1989) March 8, 2014 nucl-ex Beam-helicity asymmetries have been measured at the MAMI accelerator in Mainz for the photoproduction of mixed-charge pion pairs in the reactions $\boldsymbol{\gamma}p\rightarrow n\pi^0\pi^+$ off free protons and $\boldsymbol{\gamma}d\rightarrow (p)p\pi^0\pi^-$ and $\boldsymbol{\gamma}d\rightarrow (n)n\pi^0\pi^+$ off quasi-free nucleons bound in the deuteron for incident photon energies up to 1.4 GeV. 
Circularly polarized photons were produced from bremsstrahlung of longitudinally polarized electrons and tagged with the Glasgow-Mainz magnetic spectrometer. The charged pions, recoil protons, recoil neutrons, and decay photons from $\pi^0$ mesons were detected in the 4$\pi$ electromagnetic calorimeter composed of the Crystal Ball and TAPS detectors. Using a complete kinematic reconstruction of the final state, excellent agreement was found between the results for free and quasi-free protons, suggesting that the quasi-free neutron results are also a close approximation of the free-neutron asymmetries. A comparison of the results to the predictions of the Two-Pion-MAID reaction model shows that the reaction mechanisms are still not well understood, in particular at low incident photon energies in the second nucleon-resonance region. • ### Demonstration of a novel technique to measure two-photon exchange effects in elastic $e^\pm p$ scattering(1306.2286) July 10, 2013 nucl-ex The discrepancy between proton electromagnetic form factors extracted using unpolarized and polarized scattering data is believed to be a consequence of two-photon exchange (TPE) effects. However, the calculations of TPE corrections have significant model dependence, and there is limited direct experimental evidence for such corrections. We present the results of a new experimental technique for making direct $e^\pm p$ comparisons, which has the potential to make precise measurements over a broad range in $Q^2$ and scattering angles. We use the Jefferson Lab electron beam and the Hall B photon tagger to generate a clean but untagged photon beam. The photon beam impinges on a converter foil to generate a mixed beam of electrons, positrons, and photons. A chicane is used to separate and recombine the electron and positron beams while the photon beam is stopped by a photon blocker. 
This provides a combined electron and positron beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen target. The large acceptance CLAS detector is used to identify and reconstruct elastic scattering events, determining both the initial lepton energy and the sign of the scattered lepton. The data were collected in two days with a primary electron beam energy of only 3.3 GeV, limiting the data from this run to smaller values of $Q^2$ and scattering angle. Nonetheless, this measurement yields a data sample for $e^\pm p$ with statistics comparable to those of the best previous measurements. We have shown that we can cleanly identify elastic scattering events and correct for the difference in acceptance for electron and positron scattering. The final ratio of positron to electron scattering: $R=1.027\pm0.005\pm0.05$ for $<Q^2>=0.206$ GeV$^2$ and $0.830\leq \epsilon\leq 0.943$. • ### Measurement of the beam-helicity asymmetry $I^{\odot}$ in the photoproduction of $\pi^0$-pairs off the proton and off the neutron(1304.1919) April 6, 2013 nucl-ex Beam-helicity asymmetries have been measured at the MAMI accelerator in Mainz for the photoproduction of neutral pion pairs in the reactions $\vec{\gamma}p\rightarrow p\pi^0\pi^0$ and $\vec{\gamma}d\rightarrow (n)p\pi^0\pi^0$, $\vec{\gamma}d\rightarrow (p)n\pi^0\pi^0$ off free protons and off quasi-free nucleons bound in the deuteron for incident photon energies up to 1.4 GeV. Circularly polarized photons were produced from bremsstrahlung of longitudinally polarized electrons and tagged with the Glasgow magnetic spectrometer. Decay photons from the $\pi^0$ mesons, recoil protons, and recoil neutrons were detected in the 4$\pi$ covering electromagnetic calorimeter composed of the Crystal Ball and TAPS detectors. After kinematic reconstruction of the final state, excellent agreement was found between the results for free and quasi-free protons. 
This demonstrates that the free-nucleon behavior of such observables can be extracted from measurements with quasi-free nucleons, which is the only possibility for the neutron. Contrary to expectations, the measured asymmetries are very similar for reactions off protons and neutrons. The results are compared to the predictions from the Two-Pion-MAID reaction model and (for the proton) also to the Bonn-Gatchina coupled channel analysis. • ### Coherent photoproduction of eta-mesons off 3He - search for eta-mesic nuclei(1201.6517) Jan. 31, 2012 nucl-ex Coherent photoproduction of $\eta$-mesons off $^3$He, i.e. the reaction $\gamma ^3{He}\rightarrow \eta ^3{He}$, has been investigated in the near-threshold region. The experiment was performed at the Glasgow tagged photon facility of the Mainz MAMI accelerator with the combined Crystal Ball - TAPS detector. Angular distributions and the total cross section were measured using the $\eta\rightarrow \gamma\gamma$ and $\eta\rightarrow 3\pi^0\rightarrow 6\gamma$ decay channels. The observed extremely sharp rise of the cross section at threshold and the behavior of the angular distributions are evidence for a strong $\eta {^3{He}}$ final state interaction, pointing to the existence of a resonant state. The search for further evidence of this state in the excitation function of $\pi^0$-proton back-to-back emission in the $\gamma ^3{He}\rightarrow \pi^0 pX$ reaction revealed a very complicated structure of the background and could not support previous conclusions. • ### Precise Measurements of Beam Spin Asymmetries in Semi-Inclusive $\pi^0$ production(1106.2293) Sept. 12, 2011 hep-ex We present studies of single-spin asymmetries for neutral pion electroproduction in semi-inclusive deep-inelastic scattering of 5.776 GeV polarized electrons from an unpolarized hydrogen target, using the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility. 
A substantial $\sin \phi_h$ amplitude has been measured in the distribution of the cross section asymmetry as a function of the azimuthal angle $\phi_h$ of the produced neutral pion. The dependence of this amplitude on Bjorken $x$ and on the pion transverse momentum is extracted with significantly higher precision than previous data and is compared to model calculations. • ### Near-threshold Photoproduction of Phi Mesons from Deuterium(1011.1305) Dec. 14, 2010 nucl-ex We report the first measurement of the differential cross section on $\phi$-meson photoproduction from deuterium near the production threshold for a proton using the CLAS detector and a tagged-photon beam in Hall B at Jefferson Lab. The measurement was carried out by a triple coincidence detection of a proton, $K^+$ and $K^-$ near the theoretical production threshold of 1.57 GeV. The extracted differential cross sections $\frac{d\sigma}{dt}$ for the initial photon energy from 1.65-1.75 GeV are consistent with predictions based on a quasifree mechanism. This experiment establishes a baseline for a future experimental search for an exotic $\phi$-N bound state from heavier nuclear targets utilizing subthreshold/near-threshold production of $\phi$ mesons. • ### Study of the gp-->etap reaction with the Crystal Ball detector at the Mainz Microtron(MAMI-C)(1007.0777) Sept. 25, 2010 hep-ph, hep-ex, nucl-ex, nucl-th The gp-->etap reaction has been measured with the Crystal Ball and TAPS multiphoton spectrometers in the energy range from the production threshold of 707 MeV to 1.4 GeV (1.49 =< W >= 1.87 GeV). Bremsstrahlung photons produced by the 1.5-GeV electron beam of the Mainz Microtron MAMI-C and momentum analyzed by the Glasgow Tagging Spectrometer were used for the eta-meson production. Our accumulation of 3.8 x 10^6 gp-->etap-->3pi0p-->6gp events allows a detailed study of the reaction dynamics. 
The gp-->etap differential cross sections were determined for 120 energy bins and the full range of the production angles. Our data show a dip near W = 1680 MeV in the total cross section caused by a substantial dip in eta production at forward angles. The data are compared to predictions of previous SAID and MAID partial-wave analyses and to the latest SAID and MAID fits that have included our data. • ### Tensor Correlations Measured in 3He(e,e'pp)n(1008.3100) Aug. 18, 2010 nucl-ex, nucl-th We have measured the 3He(e,e'pp)n reaction at an incident energy of 4.7 GeV over a wide kinematic range. We identified spectator correlated pp and pn nucleon pairs using kinematic cuts and measured their relative and total momentum distributions. This is the first measurement of the ratio of pp to pn pairs as a function of pair total momentum, $p_{tot}$. For pair relative momenta between 0.3 and 0.5 GeV/c, the ratio is very small at low $p_{tot}$ and rises to approximately 0.5 at large $p_{tot}$. This shows the dominance of tensor over central correlations at this relative momentum. • ### Measurement of Single and Double Spin Asymmetries in Deep Inelastic Pion Electroproduction with a Longitudinally Polarized Target(1003.4549) March 23, 2010 hep-ex, nucl-ex We report the first measurement of the transverse momentum dependence of double spin asymmetries in semi-inclusive production of pions in deep inelastic scattering off the longitudinally polarized proton. Data have been obtained using a polarized electron beam of 5.7 GeV with the CLAS detector at the Thomas Jefferson National Accelerator Facility (JLab). A significant non-zero $\sin2\phi$ single spin asymmetry was also observed for the first time indicating strong spin-orbit correlations for transversely polarized quarks in the longitudinally polarized proton. The azimuthal modulations of single spin asymmetries have been measured over a wide kinematic range.
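Several of the abstracts above quote azimuthal moments of an asymmetry, such as the $\sin\phi_h$ and $\sin 2\phi_h$ amplitudes. As a hedged illustration only (the collaborations' actual extractions involve acceptance corrections and likelihood fits), such a moment can be projected out of an evenly binned azimuthal distribution using the orthogonality of the sine basis:

```python
import math

def sin_moment(phis, values, k):
    """Project a distribution sampled at evenly spaced azimuthal angles
    onto sin(k * phi): for A(phi) = sum_k a_k sin(k phi), the projection
    (2/N) * sum_i A(phi_i) sin(k phi_i) recovers a_k."""
    n = len(phis)
    return 2.0 / n * sum(v * math.sin(k * p) for p, v in zip(phis, values))

# Synthetic asymmetry with a known sin(phi) amplitude of 0.05 and no
# sin(2*phi) component, sampled in 360 evenly spaced azimuthal bins.
phis = [2.0 * math.pi * i / 360 for i in range(360)]
vals = [0.05 * math.sin(p) for p in phis]
# sin_moment(phis, vals, 1) recovers 0.05 up to rounding;
# sin_moment(phis, vals, 2) vanishes, as for the Collins-suppressed moment.
```

With real data the bins are not perfectly uniform and carry statistical weights, so a fit of the form $a_1 \sin\phi_h + a_2 \sin 2\phi_h$ replaces the bare projection, but the underlying orthogonality argument is the same.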
http://www.bactra.org/weblog/1138.html
## February 25, 2016

### Denying the Service of a Differentially Private Database

Attention conservation notice: A half-clever dig at one of the more serious and constructive attempts to do something about an important problem that won't go away on its own. It doesn't even explain the idea it tries to undermine.

Jerzy's "cursory overview of differential privacy" post brings back to mind an idea which I doubt is original, but whose source I can't remember. (It's not Bambauer et al.'s "Fool's Gold: an Illustrated Critique of Differential Privacy" [ssrn/2326746], though they do make a related point about multiple queries.) The point of differential privacy is to guarantee that adding or removing any one person from the data base can't change the likelihood function by more than a certain factor; that the log-likelihood remains within $\pm \epsilon$. This is achieved by adding noise with a Laplace (double-exponential) distribution to the output of any query from the data base, with the magnitude of the noise being inversely related to the required bound $\epsilon$. (Tighter privacy bounds require more noise.) The tricky bit is that these $\epsilon$s are additive across queries. If the $i^{\mathrm{th}}$ query can change the log-likelihood by up to $\pm \epsilon_i$, a series of queries can change the log-likelihood by up to $\sum_{i}{\epsilon_i}$. If the data-base owner allows a constant $\epsilon$ per query, we can then break the privacy by making lots of queries. Conversely, if the $\epsilon$ per query is not to be too tight, we can only allow a small number of constant-$\epsilon$ queries. A final option is to gradually ramp down the $\epsilon_i$ so that their sum remains finite, e.g., $\epsilon_i \propto i^{-2}$. This would mean that early queries were subject to little distortion, but later ones were more and more noisy.
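The bookkeeping described above is easy to sketch. The following toy Python server (my own illustration, not any real differential-privacy library) answers counting queries with Laplace noise of scale $1/\epsilon$, charges each answered query against a fixed total budget, and refuses once the budget is spent; that refusal is precisely the lever the denial-of-service attack pulls.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise by inverting the CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class PrivateCounter:
    """Toy counting server with an additive differential-privacy budget."""

    def __init__(self, data, total_budget):
        self.data = list(data)
        self.remaining = total_budget

    def count(self, predicate, epsilon):
        # Each answered query spends epsilon; since the epsilons add up,
        # the server must refuse (return None) once the budget is gone.
        if epsilon > self.remaining:
            return None
        self.remaining -= epsilon
        true_count = sum(1 for x in self.data if predicate(x))
        # Counting queries have sensitivity 1, so the noise scale is
        # 1/epsilon: tighter privacy (smaller epsilon) means more noise.
        return true_count + laplace_noise(1.0 / epsilon)

server = PrivateCounter(range(100), total_budget=1.0)
answers = [server.count(lambda x: x % 2 == 0, epsilon=0.25) for _ in range(6)]
# The first four queries (4 * 0.25 = 1.0) get noisy answers; the last two
# are refused, which is the lever an attacker spamming queries exploits.
```

Ramping the per-query $\epsilon_i$ down instead of refusing merely swaps the hard cutoff for answers whose noise grows without bound, so the spam attack degrades the database either way.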
One side effect of any of these schemes, which is what I want to bring out, is that they offer a way to make the database unusable, or nearly unusable, for everyone else. I make the queries I want (if any), and then flood the server with random, pointless queries about the number of cars driven by left-handed dentists in Albuquerque (or whatever). Either the server has a fixed $\epsilon$ per query, and so a fixed upper limit on the number of queries, or $\epsilon$ grows after each query. In the first case, the server has to stop answering others' queries; in the second, eventually they get only noise. Or --- more plausibly --- whoever runs the server has to abandon their differential privacy guarantee. This same attack would also work, by the way, against the "re-usable holdout". That paper (not surprisingly, given the authors) is basically about creating a testing set, and then answering predictive models' queries about it while guaranteeing differential privacy. To keep the distortion from blowing up, only a limited number of queries can be asked of the testing-set server. That is, the server is explicitly allowed to return NA, rather than a proper answer, and it will always do so after enough questions. In the situation they imagine, though, of the server being a "leaderboard" in a competition among models, the simple way to win is to put in a model early (even a decent model, for form's sake), and then keep putting trivial variants of it in, as often as possible, as quickly as possible. This is because each time I submit a model, I deprive all my possible opponents of one use of the testing set, and if I'm fast enough I can keep them from ever having their models tested at all. Posted at February 25, 2016 11:09 | permanent link
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=HOGHC7_2014_v28n4_306
Low-temperature Mechanical Behavior of Super Duplex Stainless Steel Considering High Temperature Environment

Authors: Kim, Myung-Soo; Jung, Won-Do; Kim, Jeong-Hyeon; Lee, Jae-Myung

Abstract: Super duplex stainless steels (sDSS) are well suited to severely corrosive conditions such as offshore and marine applications like pipelines and flanges. sDSS has better mechanical properties and corrosion resistance than standard duplex stainless steel (DSS), but its higher alloy content makes it more prone to sigma-phase precipitation, which degrades both the mechanical properties and the corrosion resistance. In addition, sDSS exhibits a ductile-brittle transition temperature (DBTT) because its microstructure is about 50% ferrite. In the actual operating environment, sDSS is thermally affected by welding and by sub-zero temperatures. This study analyzed how precipitated sDSS behaves at sub-zero temperatures, using annealing heat treatment followed by sub-zero tensile tests. Six types of specimens, annealed for up to 60 min, were tested in a sub-zero chamber. According to the experimental results, increasing the annealing time reduces the elongation of sDSS, and decreasing the tensile-test temperature raises the flow stress and tensile stress. In particular, the elongation of specimens annealed for 15 min and 30 min dropped markedly with decreasing test temperature because of the increasing sigma-phase fraction ratio.

Keywords: Super duplex stainless steel; Sigma phase; Sub-zero tensile test

Language: Korean
https://datascience.stackexchange.com/questions/114410/how-to-remove-noise-from-signals
# How to remove noise from signals? I have a sensor that outputs signals (two example signals below). I use 2000 signals as my data; some of them are clean and some are bad. All clean signals have peaks, and all bad signals look like sinusoids with added noise. I am training a neural network on this data (code below). Below there is a signal I want to keep and a noisy signal I want to delete (the second one). Is there any method to do that? I want to delete those signals because they ruin my accuracy score when I train the ANN. Two signals: My code: X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.4) ann = tf.keras.models.Sequential() # Initialising ANN
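One possible screen (a sketch, not taken from the question's answers): since the clean signals are described as having sharp peaks while the bad ones are noisy sinusoids, a crest-factor test — peak amplitude over RMS — can separate them before training. The threshold of 3.0 is an assumption to tune on labelled examples:

```python
import numpy as np

def is_clean(signal, crest_threshold=3.0):
    """Crest factor = peak amplitude / RMS (after removing the mean).

    A signal dominated by one sharp peak has a high crest factor; a
    noisy sinusoid stays near sqrt(2). Tune the threshold on labelled
    examples of clean/bad signals.
    """
    x = signal - np.mean(signal)
    rms = np.sqrt(np.mean(x ** 2))
    return np.max(np.abs(x)) / rms > crest_threshold

# Synthetic stand-ins for the two signals in the question:
t = np.linspace(0.0, 1.0, 500)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)  # one sharp peak
bad = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

kept = [s for s in (clean, bad) if is_clean(s)]  # keeps only `clean`
```

Running `X = [s for s in X if is_clean(s)]` before `train_test_split` would drop the sinusoid-like signals from the training data.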
http://mathhelpforum.com/calculus/11552-summation-real-sequence-print.html
Summation of a real sequence • February 13th 2007, 10:29 AM led5v Summation of a real sequence While trying to prove a mathematical relationship, I ended up with the following term $\sum_{k=0}^{\infty} \sum_{n=0}^k kF(n)G(k-n)$ where F and G represent functions that accept integer arguments and return real numbers. Now, the final goal of my proof is equal to $\sum_{x=0}^{\infty} \sum_{y=0}^{\infty} (x+y)F(x)G(y)$ Can someone please tell me whether the above two can be proved to be equivalent? I would really appreciate it if you could illustrate all the intermediate steps in this proof. • February 13th 2007, 11:17 AM ThePerfectHacker Quote: Originally Posted by led5v While trying to prove a mathematical relationship, I ended up with the following term $\sum_{k=0}^{\infty} \sum_{n=0}^k kF(n)G(k-n)$ where F and G represent functions that accept integer arguments and return real numbers. Now, the final goal of my proof is equal to $\sum_{x=0}^{\infty} \sum_{y=0}^{\infty} (x+y)F(x)G(y)$ Can someone please tell me whether the above two can be proved to be equivalent? I would really appreciate it if you could illustrate all the intermediate steps in this proof. .. • February 13th 2007, 11:29 AM led5v Thanks for your illustrative reply. Now I understand that both of these terms are equal. But I still can't find a formal argument to support this claim. Can this be proved in a formal manner as well, using the rules of summations? I would be really thankful if someone could point me to a formal proof of this goal. • February 14th 2007, 08:52 AM led5v It seems that the senior members in this forum think that I have got my answer, which is not the case. I am still looking for a formal argument to prove the above claim and have looked into a lot of texts on summations, but in vain. Any help in this regard would be greatly appreciated. thanks,
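A sketch of the formal argument the poster is after (assuming the double series converges absolutely, so terms may be rearranged): substitute $x=n$, $y=k-n$, so that $k=x+y$ — this is exactly the Cauchy-product reindexing.

```latex
\begin{align*}
\sum_{k=0}^{\infty} \sum_{n=0}^{k} k\,F(n)\,G(k-n)
  &= \sum_{k=0}^{\infty} \sum_{n=0}^{k} \bigl(n + (k-n)\bigr)\,F(n)\,G(k-n) \\
  &= \sum_{x=0}^{\infty} \sum_{y=0}^{\infty} (x+y)\,F(x)\,G(y),
\end{align*}
```

since the map $(k,n)\mapsto(x,y)=(n,\,k-n)$ is a bijection from $\{(k,n): k\ge 0,\ 0\le n\le k\}$ onto $\{(x,y): x\ge 0,\ y\ge 0\}$, and absolute convergence lets us sum over that index set in either order.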
https://firas.moosvi.com/oer/physics_bank/content/public/019.Magnetism/The%20Hall%20Effect/Copper%20in%20a%20Magnetic%20Field/cu_in_magnetic_field.html
# Copper in a Magnetic Field

A strip of copper is placed in a uniform magnetic field of magnitude $${{params.B}}\textrm{ T}$$. The Hall electric field is measured to be $${{params.E}} \times 10^{-3}\textrm{ V/m}$$.

## Part 1

What is the drift speed of the conduction electrons?

### Answer Section

Please enter a numeric value.

## Part 2

Assuming that $$n = {{params.n}} \times 10^{28}$$ electrons per cubic meter and that the cross-sectional area of the strip is $${{params.A}} \times 10^{-6} \textrm{ m}^{2}$$, calculate the current in the strip.

### Answer Section

Please enter a numeric value.

## Attribution

Problem is licensed under the CC-BY-NC-SA 4.0 license.
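With the template parameters replaced by sample numbers (the values below are assumptions, since `{{params.*}}` are placeholders), the two parts reduce to $v_d = E/B$ and $I = n e v_d A$:

```python
# Sample values standing in for the {{params.*}} placeholders:
B = 1.5          # magnetic field, T
E = 2.5e-3       # Hall electric field, V/m
n = 8.5e28       # electron density, m^-3
A = 2.0e-6       # cross-sectional area, m^2
e = 1.602e-19    # elementary charge, C

# Part 1: in equilibrium the magnetic and electric forces balance,
# e*E = e*v_d*B, so the drift speed is
v_d = E / B

# Part 2: current = charge density * charge * drift speed * area
I = n * e * v_d * A

print(f"v_d = {v_d:.3e} m/s, I = {I:.1f} A")
```

With these sample values the drift speed is about $1.67\times10^{-3}$ m/s and the current about 45 A.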
http://tex.stackexchange.com/questions/42068/contribute-a-diagram-to-a-package-author?answertab=active
# Contribute a diagram to a package author? [closed] Well, this turned out to be a completely different question from the one I started writing! Originally I was wondering how to make a version of \textit (and a parallel version of \textsl) that automatically tracks the nesting level and toggles on/off based on the oddness/evenness of the nesting depth. \emph of course does this, but it isn't guaranteed to render in italics, and there doesn't seem to be a corresponding version of \emph for slanted type. And then I discovered the slemph package, which provides exactly what I want. Sometimes, while formulating a question, one thinks of better search terms/phrases and is better able to find the answer! Anyway, I figured that a picture would be the best way to ask my question (originally), so I started out by making this drawing, which describes fictional macros \RM, \IT, \SL, and \BF which behave well together like \textit, \textsl, \textbf, etc.: And then I made this, which demonstrates the inner workings of the decision-making that I was hoping to hack together: But again, it turns out that \textitswitch and \textslswitch of package slemph already do exactly what I want, and I basically wasted my time making the diagrams. But I did learn some new things. :-) So, as long as I have these diagrams (and the accompanying LaTeX source), I'm wondering if there is a preferred way to submit things to package authors for their consideration. Is it acceptable to just e-mail them? Or is there a more formal gateway through CTAN? This particular package hasn't been updated in 13 years. I would definitely find a diagram like this helpful in the documentation for the package. I actually came across the package halfway through my investigation, but brushed it aside carelessly because I didn't realize from the description that it actually did what I wanted! A picture like this would have led me to see it immediately.
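For readers landing here with the original problem, a minimal sketch of the nesting-level trick (a hand-rolled illustration, not the actual implementation in `slemph`): keep a counter of the nesting depth and pick the italic or upright shape from its parity.

```latex
\documentclass{article}
\newcounter{itdepth}
% Extra braces keep the font change local; \stepcounter is global,
% so the counter survives the group while the shape does not.
\newcommand{\IT}[1]{{%
  \stepcounter{itdepth}%
  \ifodd\value{itdepth}\itshape\else\upshape\fi
  #1%
  \addtocounter{itdepth}{-1}}}
\begin{document}
Roman \IT{italic \IT{roman \IT{italic}} italic} roman.
\end{document}
```

The same pattern with \slshape in place of \itshape gives the slanted variant.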
- ## closed as not a real question by lockstep, percusse, Joseph Wright♦Mar 4 '12 at 17:46 It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question. 1) These diagrams are awesome. 2) This question has been asked and answered on TeX - LaTeX Meta, but I don't quite remember why on meta: meta.tex.stackexchange.com/questions/1887/…. 3) You could still ask a question about your initial problem, just to share the knowledge. Cf. meta.tex.stackexchange.com/questions/4/…. 4) Perhaps you could even make up a question to share the source of your diagrams? ;) –  doncherry Jan 24 '12 at 0:09 I upvoted this to cancel the automatic "not a real question" downvote. –  lockstep Mar 4 '12 at 17:59
http://tex.stackexchange.com/tags/table-of-contents/hot
# Tag Info

Following Astrinus's advice, you can add the two lines of code between \makeatletter and \makeatother, given in egreg's answer, to modify the entry in the table of contents. Here is how you can do this: \documentclass[12pt, a4paper]{article} \makeatletter \let\latexl@section\l@section ...

You can easily do that with the titlesec/titletoc package. It has commands that allow for a different formatting of numbered and unnumbered sections (if you want to add unnumbered sections) in the table of contents: \documentclass[a4paper]{article} \usepackage{titletoc} \begin{document} \titlecontents{section}[0em] {\vskip 0.5ex}% {\scshape}% numbered ...

This happens because the beamer column system uses the LaTeX minipage behind the scenes: a vertical box. As these are not set with a fixed height they don't stretch, in contrast to setting a frame where beamer does some resizing (so the stretch is important). For a one-off application I'd be tempted to use a raw minipage and adjust as required. For example ...

To start the appendices section, you use the command \appendix, not an environment. Change \begin{appendices} to \appendix and remove \end{appendices}. If you need more flexibility, load Will Robertson and Peter Wilson's appendix package. Here is an MWE, something that at least compiles: \documentclass[ 11pt, a4paper, draft, ...
http://www.talkstats.com/threads/regression-interpretation.28291/#post-94906
# regression interpretation

#### Snowy88

##### New Member

Which results should I use to determine the difference between males and females when all other predictors are held constant? I have 4 predictors (status, income, verbal score and spending). I did a lm as: fit<-lm(spending ~ sex, data=spending) which produced this result:

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   29.775      5.498   5.415 2.28e-06 ***
sex          -25.909      8.648  -2.996  0.00444 **

However, other students used the full lm and were correct that females spend 22.12 less than men:

Call: lm(formula = spending ~ sex + status + income + verbal, data = spending)

Residuals:
    Min      1Q  Median      3Q     Max
-51.082 -11.320  -1.451   9.452  94.252

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 22.55565   17.19680   1.312   0.1968
sex        -22.11833    8.21111  -2.694   0.0101 *
status       0.05223    0.28111   0.186   0.8535
income       4.96198    1.02539   4.839 1.79e-05 *
verbal      -2.95949    2.17215  -1.362   0.1803

#### trinker

##### ggplot2orBust

You say: "the difference in males and females when all other predictors are held constant" [and]

Code: Call: lm(formula = spending ~ sex + status + income + verbal, data = spending)

I would think then that your model should include all the predictors before the sex one.

Code: Call: lm(formula = spending ~ status + income + verbal + sex, data = spending)

Also, if spending is your outcome variable it can't be your predictor too (at least I'm assuming you have a model predicting spending; but if you're trying to predict sex, though it doesn't seem so, perhaps binary logistic regression is a better approach).
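The point about holding other predictors constant can be illustrated with a quick simulation (synthetic data, not the thread's dataset; numpy's least squares stands in for R's `lm`): the sex coefficient from the full model recovers the "all else equal" difference, while the sex-only model also absorbs the sex/income association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
sex = rng.integers(0, 2, n).astype(float)   # 0 = male, 1 = female
income = rng.normal(10, 2, n) + 1.5 * sex   # income differs by sex too
# "true" model: females spend 22 less, all else equal
spending = 30 - 22 * sex + 5 * income + rng.normal(0, 3, n)

def ols(y, *cols):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_simple = ols(spending, sex)          # like lm(spending ~ sex)
b_full = ols(spending, sex, income)    # like lm(spending ~ sex + income)

# b_full[1] is close to -22, the difference holding income constant;
# b_simple[1] also soaks up the income gap between the groups, so it differs.
```

This is why the full model, not the sex-only model, answers "holding all other predictors constant."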
http://pilot.cnxproject.org/content/collection/col10064/latest/module/m10766/latest
# Introduction

Given a line $l$ and a point $p$ in the plane, what's the closest point $m$ to $p$ on $l$?

Same problem: Let $x$ and $v$ be vectors in $\mathbb{R}^2$, with $\|v\|=1$. For what value of $\alpha$ is $\|x-\alpha v\|^2$ minimized? (What point in $\mathrm{span}\{v\}$ best approximates $x$?)

The condition is that $x-\hat{\alpha} v$ and $\alpha v$ are orthogonal.

# Calculating α

How to calculate $\hat{\alpha}$? We know that $x-\hat{\alpha} v$ is perpendicular to every vector in $\mathrm{span}\{v\}$, so for all $\beta$: $\langle x-\hat{\alpha} v,\ \beta v\rangle = 0$, i.e. $\bar{\beta}\langle x,v\rangle - \hat{\alpha}\bar{\beta}\langle v,v\rangle = 0$. Because $\langle v,v\rangle = 1$, we get $\langle x,v\rangle - \hat{\alpha} = 0$, so $\hat{\alpha} = \langle x,v\rangle$.

The closest vector in $\mathrm{span}\{v\}$ is $\langle x,v\rangle v$, the projection of $x$ onto $v$. We can do the same thing in higher dimensions.

Exercise 1: Let $V\subset H$ be a subspace of a Hilbert space $H$. Let $x\in H$ be given. Find the $y\in V$ that best approximates $x$, i.e., such that $\|x-y\|$ is minimized.

Solution:
1. Find an orthonormal basis $b_1,\dots,b_k$ for $V$.
2. Project $x$ onto $V$ using $y=\sum_{i=1}^{k}\langle x,b_i\rangle b_i$.

Then $y$ is the closest point in $V$ to $x$, and $(x-y)\perp V$ (for all $v\in V$, $\langle x-y,v\rangle = 0$).

Example 1: $x\in\mathbb{R}^3$, $V=\mathrm{span}\left\{\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix}\right\}$, $x=\begin{pmatrix}a\\b\\c\end{pmatrix}$. So $y=\sum_{i=1}^{2}\langle x,b_i\rangle b_i = a\begin{pmatrix}1\\0\\0\end{pmatrix}+b\begin{pmatrix}0\\1\\0\end{pmatrix}=\begin{pmatrix}a\\b\\0\end{pmatrix}$.

Example 2: V = {space of periodic signals with frequency no greater than $3\omega_0$}. Given periodic $f(t)$, what is the signal in $V$ that best approximates $f$?
1. $\left\{\frac{1}{\sqrt{T}}e^{i\omega_0 k t},\ k=-3,-2,\dots,2,3\right\}$ is an ONB for $V$.
2. $g(t)=\frac{1}{T}\sum_{k=-3}^{3}\langle f(t), e^{i\omega_0 k t}\rangle e^{i\omega_0 k t}$ is the closest signal in $V$ to $f(t)$ ⇒ reconstruct $f(t)$ using only 7 terms of its Fourier series.

Example 3: Let V = {functions piecewise constant between the integers}.
1. Find an ONB $\{b_i\}$ for $V$. The best piecewise constant approximation is
$$g(t)=\sum_{i=-\infty}^{\infty}\langle f,b_i\rangle b_i, \qquad \langle f,b_i\rangle = \int_{-\infty}^{\infty} f(t)\, b_i(t)\,dt = \int_{i-1}^{i} f(t)\,dt.$$

Example 4: This demonstration explores approximation using a Fourier basis and a Haar wavelet basis. See here for instructions on how to use the demo.
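Exercise 1's recipe is easy to check numerically (a small sketch with numpy; the orthonormal basis here comes from a QR factorization of arbitrary spanning vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))        # two spanning vectors for V in R^5
Q, _ = np.linalg.qr(A)             # columns of Q: orthonormal basis b_1, b_2

x = rng.normal(size=5)
# y = sum_i <x, b_i> b_i  -- the projection of x onto V
y = Q @ (Q.T @ x)

# (x - y) is orthogonal to V: its inner product with every basis vector is ~0
residual = Q.T @ (x - y)
```

By the projection theorem, `y` beats any other element of V: `np.linalg.norm(x - y)` is no larger than `np.linalg.norm(x - Q @ c)` for any coefficient vector `c`.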
http://math.stackexchange.com/questions/697607/triangular-number-which-is-also-a-power-of-2-number
# Triangular number which is also a Power of 2 number? [closed] Is there a triangular number which is also a power of 2? $(n^2+n)/2 = 2^n$ besides 1. - ## closed as off-topic by Thursday, Claude Leibovici, Tunk-Fey, Hakim, Moron Aug 3 at 10:19 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Thursday, Claude Leibovici, Tunk-Fey, Hakim, Moron If this question can be reworded to fit the rules in the help center, please edit the question. You seem to have two different $n$'s in there! –  TonyK Mar 3 at 13:09 Since this is a homework question, you should also add your own thoughts and ideas to the body of this question. That will prevent people from telling you things which you already know. Also, please don't use 'homework' as a stand-alone tag. Please reconsider editing your post to add more tags and adding your own thoughts. –  rah4927 Mar 3 at 13:09 And no, you cannot have a triangular number equal to a power of 2. –  rah4927 Mar 3 at 13:12 $$\frac{n(n+1)}2=\frac n2(n+1)=n\frac{n+1}2$$ Thus, either $\;n\;$ or $\;n+1\;$ is odd, so there's a prime factor different from two... - Hint: from your expression above, show that every triangular number has a prime factor which is not 2. - No. Triangular numbers are of the form $\displaystyle \frac{n(n+1)}{2}$, and at least one of the two ($n$ or $n+1$) has to be an odd number. And unless $n = 0$ or $n = 1$, the odd number will prevent the triangular number from being a power of $2$. When $n = 0$, $\displaystyle \frac{n(n+1)}{2} = 0$, which by default is not a power of any number other than itself.
When $n = 1$, $\displaystyle \frac{n(n+1)}{2} = 1$, which is the trivial example (and one you are not looking for). -
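The parity argument above is easy to sanity-check by brute force (a quick sketch; the loop bound is arbitrary):

```python
def triangular(n):
    """The n-th triangular number, n(n+1)/2."""
    return n * (n + 1) // 2

def is_power_of_two(m):
    """Bit trick: a positive power of two has exactly one set bit."""
    return m > 0 and (m & (m - 1)) == 0

# Among the first 100000 triangular numbers, only T(1) = 1 = 2^0
# is a power of two -- matching the odd-factor argument.
hits = [n for n in range(1, 100_000) if is_power_of_two(triangular(n))]
```

As expected, `hits` contains only `1`.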
http://mathv.chapman.edu/~jipsen/structures/doku.php/semilattices_with_identity
## Semilattices with identity Abbreviation: Slat$_1$ ### Definition A semilattice with identity is a structure $\mathbf{S}=\langle S,\cdot,1\rangle$ of type $\langle 2,0\rangle$ such that $\langle S,\cdot\rangle$ is a semilattice $1$ is an identity for $\cdot$: $x\cdot 1=x$ ##### Morphisms Let $\mathbf{S}$ and $\mathbf{T}$ be semilattices with identity. A morphism from $\mathbf{S}$ to $\mathbf{T}$ is a function $h:S\rightarrow T$ that is a homomorphism: $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$ Example 1: ### Properties Classtype variety decidable in PTIME decidable undecidable no unbounded no no no no no ### Finite members $\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
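The Example slot above is empty in the source; a standard example (my addition, not from the page) is the meet-semilattice of subsets of a set, with the full set as identity:

```latex
\text{For any set } X,\quad
\mathbf{S} = \langle \mathcal{P}(X),\ \cap,\ X \rangle
\text{ is a semilattice with identity: }
\cap \text{ is associative, commutative and idempotent, and }
A \cap X = A \text{ for every } A \subseteq X.
```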
https://brilliant.org/problems/30-second-or-less/
# Googly Eyes

Geometry Level 2

In the figure above, $\overline{AB} = 8$. Find the radius of the circle $C_2$.
https://sicp.comp.nus.edu.sg/chapters/4.4.3.html
4.4.3 Is Logic Programming Mathematical Logic?

[1] That a particular method of inference is legitimate is not a trivial assertion. One must prove that if one starts with true premises, only true conclusions can be derived. The method of inference represented by rule applications is modus ponens, the familiar method of inference that says that if A is true and A implies B is true, then we may conclude that B is true.

[2] We must qualify this statement by agreeing that, in speaking of the inference accomplished by a logic program, we assume that the computation terminates. Unfortunately, even this qualified statement is false for our implementation of the query language (and also false for programs in Prolog and most other current logic programming languages) because of our use of not and javascript_value. As we will describe below, the not implemented in the query language is not always consistent with the not of mathematical logic, and javascript_value introduces additional complications. We could implement a language consistent with mathematical logic by simply removing not and javascript_value from the language and agreeing to write programs using only simple queries, and, and or. However, this would greatly restrict the expressive power of the language. One of the major concerns of research in logic programming is to find ways to achieve more consistency with mathematical logic without unduly sacrificing expressive power.

[3] This is not a problem of the logic but one of the procedural interpretation of the logic provided by our interpreter. We could write an interpreter that would not fall into a loop here. For example, we could enumerate all the proofs derivable from our assertions and our rules in a breadth-first rather than a depth-first order. However, such a system makes it more difficult to take advantage of the order of deductions in our programs. One attempt to build sophisticated control into such a program is described in deKleer et al. 1977. Another technique, which does not lead to such serious control problems, is to put in special knowledge, such as detectors for particular kinds of loops (exercise 4.58). However, there can be no general scheme for reliably preventing a system from going down infinite paths in performing deductions. Imagine a diabolical rule of the form: to show $P(x)$ is true, show that $P(f(x))$ is true, for some suitably chosen function $f$.

[4] Consider the query not(baseball_fan(list("Bitdiddle", "Ben"))). The system finds that baseball_fan(list("Bitdiddle", "Ben")) is not in the data base, so the empty frame does not satisfy the inner pattern and is therefore not filtered out of the initial stream of frames. The result of the query is thus the empty frame, which is used to instantiate the input query to produce not(baseball_fan(list("Bitdiddle", "Ben"))).

[5] A discussion and justification of this treatment of not can be found in the article by Clark (1978).
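The negation-as-failure behaviour described in footnotes [2] and [4] can be sketched in a few lines. This is a toy stand-in (the facts and helper names below are invented for illustration), not the query evaluator itself:

```python
# Toy sketch of negation-as-failure; the database contents and helper
# names here are invented for illustration, not taken from SICP-JS.
facts = {
    ("job", "Ben", "computer wizard"),
    ("baseball_fan", "Louis"),
}

def derivable(goal):
    """In this toy, a goal is derivable iff it is literally asserted."""
    return goal in facts

def negation_as_failure(goal):
    # not(goal) succeeds when goal cannot be derived: absence of proof,
    # not proof of falsity.  This is why the query language's "not"
    # diverges from the "not" of mathematical logic.
    return not derivable(goal)

# Ben's fandom is simply unrecorded, yet not(...) "succeeds":
assert negation_as_failure(("baseball_fan", "Ben"))
# Louis is a recorded fan, so not(...) fails for him:
assert not negation_as_failure(("baseball_fan", "Louis"))
```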
https://www.z-car.com/blog/category/humor
## Does Newt Gingrich look like Dwight Schrute?

Some folks are claiming that Dwight and Newt are one and the same person.  I find this hard to believe.  Certainly the age difference seems to be a factor, and, as everyone knows, Dwight is a beet farmer from Pennsylvania while Newt was a college professor from Georgia. Some folks will claim that it is not really Dwight, but the actor Rainn Wilson, who is the true Doppelgänger. What do you think, folks, do you see a resemblance?

## Funny Dancer – Microsoft Kinect

You guys have probably seen the Microsoft Kinect for the Xbox 360.  It is a motion-control accessory that allows you to interact with your video games, just like the Wii.  Microsoft has done a pretty good job with this device, although it seems as if they are still waiting for the killer app to really drive sales.  Recently, when I was at the Lenox Square Mall in Atlanta (a Simon Mall), I caught a glimpse of this guy getting his inner groove on with a public demo of the Kinect and Dance Central.  This guy really got into his dance routine.  Gotta love it….  Can you say hilarious!

## Mystery Missile Solved!

Obviously AWE808 from Hawaii

Once again the news agencies prove incapable of actually doing some basic research to identify the “Mystery Missile”.  Ten minutes of Google searching will give you access to many sites that have pictures very similar to those seen with the Mystery Missile.   Once you actually see these contrails from multiple locations, it is clear that the “Mystery Missile” was simply a plane contrail that you can see every day if you look closely enough.  No conspiracy, no military hoax, not connected to cruise ships breaking down in the ocean.  Just a simple plane traveling from Hawaii to Phoenix, which it does every day. The picture above shows the exact same “Missile Launch” 24 hours later.
Think about it, you have two choices:

Option 1 – An airliner with a contrail, made to look thicker than usual because it is flying towards the camera, and appearing to “come out of the sea” because it came from over the horizon. Data for this includes several known similar examples, a known flight in the area flying the right path at the right time, and a lack of anything unusual on FAA ATC tapes.

OR

Option 2 – An unknown SLBM launch undetected by anyone, evidence for which is “I seen some rocket launches and they look like this – honest”.

Now I find that 3.14 does not actually equal pie…   $\pi\ne\text{pie}$ and apparently $\pi\ne 3.14$.
https://tex.stackexchange.com/questions/214773/comma-links-in-ltr-texts-in-rtl-arabic-document
# Comma links in LTR texts in RTL Arabic document

I insert an English fragment into Arabic text. If there is a comma after a digit, I get a surprising result: the comma goes before the number, not after.

The font above is "Amiri". I also tested "arabtype.ttf" from Microsoft (http://www.microsoft.com/typography/OpenTypeDev/arabic/intro.htm) and got the same result. Therefore I consider it not a bug, but some important feature.

A general solution would be to somehow remove script=arabic from the font for LTR texts. I suspect that is a hard task. An easy solution for this concrete problem is to add \hbox{} after the number. Then:

Question 2: are there other surprises for English text fragments when using script=arabic?

Sample code:

```latex
\documentclass[a4paper]{article}
\usepackage[RTLdocument]{bidi}
\begin{document}
\font\f="Amiri:script=arabic" \f
\LRE{Print 42, then exit, but not.}
\LRE{Print 42: then exit: but not.}
\end{document}
```

• Further special cases: 1) I should also add \hbox{} before the number 2) Parentheses are mirrored 3) I stopped here and gave up. Now I just change fonts. Still, I'm happy to hear the answers. – olpa Dec 4 '14 at 9:03

You should use fontspec to select fonts and font features. You can change the script using \addfontfeature{Script=Latin}. To make things more convenient it is possible to renew \LRE:

```latex
\documentclass[a4paper]{article}
\usepackage{fontspec}
\setmainfont[Script=Arabic]{Amiri}
\usepackage[RTLdocument]{bidi}
\let\oldLRE\LRE
% Change the script to Latin within \LRE
\renewcommand*{\LRE}[1]{\oldLRE{\addfontfeature{Script=Latin} #1}}
\begin{document}
\LRE{Print 42, then exit, but not.}
\LRE{Print 42: then exit: but not.}
\end{document}
```

Which gives you:

• This is exactly what I wanted to avoid. In the general case I don't know which fonts are in use. Therefore a good solution should somehow switch "script=Arabic" on and off. – olpa Dec 4 '14 at 9:01

• In that case use \addfontfeature{Script=Latin}. I've updated my answer accordingly.
– DG' Dec 4 '14 at 9:05
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.sum.fft_cosine.html
# naginterfaces.library.sum.fft_cosine

naginterfaces.library.sum.fft_cosine(x)

fft_cosine computes the discrete Fourier cosine transforms of sequences of real data values. The elements of each sequence and its transform are stored contiguously.

For full information please refer to the NAG Library document for c06rf https://www.nag.com/numeric/nl/nagdoc_27.3/flhtml/c06/c06rff.html

Parameters

x : float, array-like, shape $(m, n+1)$
The data values of the $p$th sequence to be transformed, denoted by $x^p_j$, for $j = 0, 1, \ldots, n$ and $p = 1, 2, \ldots, m$, must be stored in x[p-1, j].

Returns

x : float, ndarray, shape $(m, n+1)$
The components of the $p$th Fourier cosine transform, denoted by $\hat{x}^p_k$, for $k = 0, 1, \ldots, n$ and $p = 1, 2, \ldots, m$, are stored in x[p-1, k], overwriting the corresponding original values.

Raises

NagValueError
(errno 1) On entry, m = ⟨value⟩. Constraint: m ≥ 1.
(errno 2) On entry, n = ⟨value⟩. Constraint: n ≥ 1.
(errno -99) An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

Notes

Given $m$ sequences of $n+1$ real data values $x^p_j$, for $j = 0, 1, \ldots, n$ and $p = 1, 2, \ldots, m$, fft_cosine simultaneously calculates the Fourier cosine transforms of all the sequences, defined by

$$\hat{x}^p_k = \sqrt{\frac{2}{n}} \left( \frac{1}{2} x^p_0 + \sum_{j=1}^{n-1} x^p_j \cos\frac{\pi j k}{n} + \frac{1}{2} (-1)^k x^p_n \right), \qquad k = 0, 1, \ldots, n.$$

(Note the scale factor $\sqrt{2/n}$ in this definition.) This transform is also known as the type-I DCT.

Since the Fourier cosine transform defined above is its own inverse, two consecutive calls of fft_cosine will restore the original data.

The transform calculated by this function can be used to solve Poisson's equation when the derivative of the solution is specified at both left and right boundaries (see Swarztrauber (1977)).

The function uses a variant of the fast Fourier transform (FFT) algorithm (see Brigham (1974)) known as the Stockham self-sorting algorithm, described in Temperton (1983), together with pre- and post-processing stages described in Swarztrauber (1982). Special coding is provided for the factors 2, 3, 4 and 5.
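The claim that the transform is its own inverse is easy to check numerically. The following is an illustrative pure-Python reimplementation of the type-I DCT for a single sequence, not a call into the NAG Library:

```python
import math

def dct1(x):
    """Type-I DCT with the sqrt(2/n) scaling used by this routine; the
    input sequence has n + 1 points x_0, ..., x_n."""
    n = len(x) - 1
    out = []
    for k in range(n + 1):
        # Half weights on the two endpoint terms, full weight elsewhere.
        s = 0.5 * x[0] + 0.5 * ((-1) ** k) * x[n]
        for j in range(1, n):
            s += x[j] * math.cos(math.pi * j * k / n)
        out.append(math.sqrt(2.0 / n) * s)
    return out

data = [0.3854, 0.6772, 0.1138, 0.6751, 0.6362, 0.1424, 0.9562]
# Two consecutive transforms restore the original data.
twice = dct1(dct1(data))
assert max(abs(a - b) for a, b in zip(twice, data)) < 1e-12
```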
References Brigham, E O, 1974, The Fast Fourier Transform, Prentice–Hall Swarztrauber, P N, 1977, The methods of cyclic reduction, Fourier analysis and the FACR algorithm for the discrete solution of Poisson’s equation on a rectangle, SIAM Rev. (19(3)), 490–501 Swarztrauber, P N, 1982, Vectorizing the FFT’s, Parallel Computation, (ed G Rodrique), 51–83, Academic Press Temperton, C, 1983, Fast mixed-radix real Fourier transforms, J. Comput. Phys. (52), 340–350
https://www.rba.gov.au/publications/bulletin/2019/dec/long-term-growth-in-china.html
# Long-term Growth in China

Bulletin – December 2019 | Global Economy

## Abstract

Slowing trend growth in China, and the risks around this trajectory, are relevant to the future economic prospects of its major trading partners, including Australia. This article provides a long-term perspective on growth in China, beginning with a review of historical trends. It then examines the drivers of growth since reforms were introduced in the late 1970s and how these drivers are affecting the growth outlook. The article concludes that a range of structural headwinds will constrain growth in the coming decade, posing challenges for policymakers.

## Introduction

China is Australia's largest trading partner, and it is likely to remain so for the foreseeable future. In both values and volumes, trade with China has eclipsed Australia's other major trading partners since the late 2000s (Graph 1). The trade relationship with China has also broadened over time. While bilateral trade continues to be dominated by Australian exports of resources, such as iron ore, coal and liquefied natural gas, exports of services (especially tourism and education) and rural goods have also grown rapidly in recent years (Graph 2). Rapid growth in services exports has been reflected in large numbers of visitor arrivals from China, which have driven the overall upward trend in arrivals to Australia over the past decade. The growth in Australia's exports to China has been closely connected to domestic conditions in China. Rapid expansion of the Chinese economy in the 2000s, and a highly investment-intensive pattern of growth, spurred demand for heavy industrial products, such as steel. In turn, this has driven a sharp increase in Chinese imports of steelmaking raw materials: iron ore and coking coal.
More recently, rising household incomes in China have underpinned a preference shift towards high-quality imported agricultural and health products (including infant formula and vitamin supplements) and increased demand for overseas travel and tertiary education services. The expansion of Chinese demand in the mid 2000s outstripped the global supply of resource commodities, which boosted Australia's terms of trade and thereby supported Australian national income and government revenues (for example, through collections of resource rent taxes). It also led to significant compositional changes in Australia's labour market as workers were absorbed by the rapidly growing mining sector and associated services industries, including accounting, legal and engineering services. The depth of these linkages means that the potential for growth in China to slow further, either gradually or sharply, represents a significant risk for the Australian economy. This article analyses China's growth performance in its longer-term context and examines how underlying structural drivers of growth have shifted in recent years. It then considers the growth outlook. Finally, the article discusses the uncertainties around this trajectory, focusing on financial risks and the escalating US–China trade and technology disputes.

## Long-term Economic Trends

The People's Republic of China (PRC) has experienced pronounced swings in growth since its founding in 1949 (Graph 3). While data from official sources and alternative calculations made by academics (for example, Wu (2014)) have periodically diverged substantially, over the long term, different estimates of Chinese GDP growth display broadly similar trends. In general, growth was highly volatile during the years in which China was led by Chairman Mao Zedong (1949–76), but significantly less so during the era of economic reforms that started in the late 1970s.
The volatile growth pattern in the 1950s and 1960s was largely a consequence of the economic system that emerged during these years, but was also compounded by external factors. The devastation inflicted by the war with Japan (1937–45) and the Chinese Civil War (1927–49) necessitated the rebuilding of a large amount of infrastructure, housing and manufacturing capacity. The new government was also keen to develop heavy industry, so economic growth was initially strong. In these early years, despite radical redistribution of land to poorer farmers in rural areas, the Chinese Communist Party (CCP) initially tolerated private ownership, allowing private business and farming practices to continue in many areas (Naughton 2007, p 65). However, by the late 1950s, the introduction of central planning on a large scale began to affect economic outcomes. In rural areas, the authorities attempted to achieve economies of scale by amalgamating traditional small plots of land into cooperatives or collectives (and eventually even larger communes) worked by large numbers of families, who shared in the gains from production (Perkins 1964).[1] In urban areas, adults were assigned to ‘work units’ or danwei (such as factories) and, in compensation for their labour, received ration vouchers for grain and other essentials (Chinn 1980), as well as guaranteed housing, medical care and education for their children. Population mobility was discouraged; households were assigned urban or rural registration permits (hukou) that largely confined them to the area in which their members worked. Annual production targets and a schedule of prices for key commodities were set centrally and the state effectively assumed responsibility for allocating resources throughout society.[2] The system encountered severe challenges. A huge burden fell on government officials to make correct decisions regarding resource allocation, which then had to be implemented by Party members at the local level. 
Calibrating centrally determined policy guidance to local conditions was difficult given the size and geographical diversity of China, and local officials often lacked relevant management, agriculture or industry experience (Perkins 1964). In addition, the system distorted incentives: productive workers received the same reward as unproductive workers, which reduced their motivation to work. The periods of greatest weakness tended to coincide with radical changes in economic policies and in the political environment. Efforts to impose overly ambitious production targets during the Great Leap Forward (1958–60), exacerbated by a series of natural disasters, led to sharply weaker growth, and contributed to the country experiencing a catastrophic famine in 1962, estimated to have caused the loss of 25–30 million lives (Naughton 2007, p 72). The economy also entered recession during the immense social upheaval of the Great Proletarian Cultural Revolution (officially dated from 1966 to 1976). The consequences of central planning prompted the leadership to change course at the 3rd Plenum of the CCP's 11th National Congress in December 1978. Led by Party elder Deng Xiaoping, the CCP embarked on efforts to build a hybrid economy that allowed markets to play a greater role, albeit constrained by tight administrative controls. The first stage of reforms was to reverse the policy of collectivisation in the countryside, and reintroduce markets (and market prices) for agricultural goods. This proved crucial in increasing agricultural productivity, especially in grain production (Garnaut and Ma 1996). Subsequent reforms endeavoured to incentivise managers in the corporate sector to make state-owned enterprises (SOEs) more efficient and profitable. Throughout the 1980s and 1990s, the government loosened barriers to trade and foreign investment, which helped develop the country's manufacturing export sector and gave Chinese firms the opportunity to learn foreign technologies. 
These reforms, in turn, created the need for a modern financial system. Prior to the reforms, there was little need for banks to intermediate between lenders and borrowers, since investment was mainly financed by budgetary grants and the retained profits of enterprises, and household savings were small (Lardy 1998, pp 59–61). However, the growing investment needs of urban and rural enterprises, rising household incomes, and the gradual replacement of the strict coupon-based rationing system with a cash economy, created the need for a commercial banking system. Through the 1980s and 1990s, a large number of banks and smaller non-bank financial institutions came into operation. An important aspect of the reforms was the relaxation of controls on the prices of many goods and services that had been relatively stable under central planning (Graph 4). Yet the dangers of rapid price reform soon became apparent; during 1988–89, a period of strong growth, inflation surged to nearly 20 per cent, exacerbating political and social tensions. The government responded by implementing strict austerity measures to lower inflation, including cutting public spending, instructing banks to stop lending and reimposing price controls. While this brought inflation temporarily under control, the consequence was a sharp slowdown in parts of the economy in the late 1980s (Brandt and Zhu 2000). In a bid to reinvigorate the reform agenda, Deng Xiaoping visited several locations in southern China in 1992, giving his personal endorsement to the reform strategies being pursued there. This was followed up at the CCP's 14th National Congress in 1992 by pledges to build a ‘socialist market economy’, and more detailed plans that were issued in 1993 (Wu 2019). These efforts contributed to a quick recovery in growth, but also inflation. 
High inflation was subsequently brought under control through tighter monetary and financial policies, and measures to increase food production and imports, which alleviated upward pressure on food prices (Oppers 1997). The most important milestone in the 1990s was the reform of SOEs. Under the ‘work unit’ system, SOEs were responsible for the employment, social welfare and housing of a sizeable population; but since many were unprofitable, a large part of this welfare burden was ultimately shifted to the state. By encouraging forced layoffs of unproductive workers, and allowing smaller SOEs to be privatised, the government was able to markedly improve the efficiency of the corporate sector. Firms were forced to become profitable to survive, reducing the burden on state finances from unprofitable enterprises. The reforms also withdrew the obligation of SOEs to provide housing for workers. Instead, starting in 1998, households were permitted to purchase and sell housing that had been allocated to them, leading to the emergence of a flourishing private housing market. The reforms to SOEs heralded the end of the state-guaranteed system of social security, while also boosting the efficiency of the corporate sector. The associated housing reforms also had a lasting influence. On the one hand, during a period when real interest rates were frequently negative due to high rates of inflation, they gave people a place other than the often-volatile stock market (established only in 1990) to invest their savings. On the other hand, the creation of a housing market encouraged a huge boom in property development and investment that supported growth more broadly. 
While the late 1990s were a turbulent period for the economy for other reasons (not least of which were the Asian Financial Crisis in 1997 and a non-performing loan crisis in the banking sector), in the aftermath of these problems the Chinese economy received a major boost from its accession to the World Trade Organisation (WTO) in 2001. WTO entry required China to remove more restrictions on exports, imports and foreign investment, which enhanced China's access to overseas markets and increased the flow of trade and foreign investment through the 2000s. The global financial crisis (GFC) in 2008–09 magnified a slowing in growth that was already becoming apparent as the positive effects of earlier reforms started to wane. The GFC led to a sharp fall in advanced economies' demand for Chinese exports, which weighed heavily on domestic manufacturing. The Chinese Government's fiscal and monetary stimulus response to the crisis temporarily lifted GDP growth, largely by supporting investment in housing and infrastructure. More importantly, however, it forestalled the even sharper downturn in growth that would have eventuated in the absence of such a vigorous response.

## Growth in the Reform Era

Economic growth in the period since 1978 has largely been driven by structural forces – in particular, industrialisation, privatisation, urbanisation and demographic change. The reform era saw China industrialise on a huge scale (Graph 5). Growth in the industrial sector was especially strong in the 1990s and has remained a significant contributor to GDP growth until quite recently. The growth of the industrial sector was related in part to China's growing role in the global economy; over this period, Chinese exports increased from less than 1 per cent of global exports to more than 12 per cent. Since 2011, however, the pattern of domestic growth has shifted, being increasingly reliant on services rather than industrial production.
A second outcome of the reform era was the erosion of central planning and a flourishing of private enterprise. In 1997, the government endorsed the privatisation of the majority of SOEs nationwide, mainly through sales to existing managers and other firms, while retaining state ownership of large firms in strategic industries (Gan 2009). The SOE reforms underpinned a sharp compositional shift in urban employment (Graph 6). SOEs' share of urban employment declined from almost two-thirds in 1990 to 15 per cent by 2017, while private employment soared. The changing ownership of firms also contributed to the productivity and profitability of the business sector, as private industrial firms were typically much more efficient and profitable than state firms (Graph 7). A third trend, reinforced by economic reforms, was urbanisation. Rapid economic growth and a strong demand for labour in urban areas, especially in the burgeoning private sector, encouraged people to move from rural areas in pursuit of more lucrative job opportunities in the cities (Graph 8). This was facilitated by the abolition of the commune system and the relaxation of geographic restrictions on farmers' employment (Cai 2018). Although people newly arrived to cities could get work, the hukou system continued to restrict their access to the healthcare, pension and education benefits enjoyed by urban residents. The sustained movement of people from often unproductive jobs in agriculture to productive jobs in cities helped to boost aggregate productivity growth (Zhu 2012). It also helped fuel the boom in housing construction and the growth of transport infrastructure to facilitate the movement of millions of people each year into urban areas. A fourth trend that complemented the economic reforms was the rise in the working-age population. After a baby boom at the end of the Mao era, the working-age population surged (Graph 8). 
Subsequently, the birth rate declined for a number of reasons, including constraints imposed by the government's ‘one-child’ policy initiated in the early 1980s, and an emerging preference among households for smaller families as living standards and education levels improved (Cai 2018). Rapid growth in the working-age population created a large supply of workers that contributed both to increased production and growth in aggregate demand. However, since 2011, the total working-age population has begun to fall. The urban workforce is still increasing as a result of urbanisation, but its growth rate has started to moderate as the birth rate has fallen and the population has aged. The combination of rapid industrialisation, continuous urban expansion and a burgeoning private sector underpinned a highly investment-intensive pattern of growth. The rising working-age population also played a role, as the tendency of households to save during their prime working years led to the emergence of a large pool of savings that became available to fund investment. However, since the early 2010s, growth in investment has slowed and the contribution of investment to GDP growth has diminished (Graph 9). While growth in consumption has also moderated as household income growth has slowed, it has remained strong relative to investment growth, resulting in a gradual ‘rebalancing’ of GDP growth away from investment and towards consumption. The investment slowdown reflects a number of factors. Residential construction investment was one of the largest drivers of investment growth during the 2000s, contributing around half of total growth in investment. 
However, after a further boost from the government's stimulus response to the GFC, the share of residential investment in GDP has stabilised at around 17 per cent (Graph 10).[3] While urbanisation is still continuing, there is evidence that the supply of housing has outpaced the basic needs of the urban population; according to the China Household Finance Survey (2017), the residential vacancy rate in China was estimated at around 21 per cent in 2017, which is significantly higher than the vacancy rate in other Asian economies, the United States and Australia. Saturation in urban housing markets, particularly megacities such as Beijing and Shanghai, implies that future growth in residential investment is likely to come more from replacement or upgrading of older housing than from growth in the urban population. Such replacement or upgrading activity could, nonetheless, be substantial given households' changing aspirations for dwelling quality as their income rises. More generally, the boom in investment in the late 2000s that followed the government's stimulus response to the GFC happened at a time when growth was already slowing for structural reasons. This led to a sharp increase in the capital-to-output ratio, which has in turn lowered the marginal return on new capital spending. As a result, the marginal product of capital – that is, the returns to new investment – has declined, which is likely to have reduced the incentive of the private sector to invest (Graph 11).[4] The declining growth in the supply of labour and falling incentives to invest imply that, in the years ahead, the Chinese economy will increasingly have to rely on productivity improvements to sustain overall economic growth. Productivity growth, measured either in terms of labour productivity (i.e. 
output per worker) or total factor productivity (which accounts for the contribution of capital as well as labour input to output growth), grew rapidly over much of the period following the start of reforms in the late 1970s (Graph 12).[5] This was an important factor driving the sustained increase in per capita incomes over this period. The investment-intensive nature of Chinese growth ensured that total factor productivity growth has typically been much lower than growth in labour productivity. Alternative estimates of GDP, capital and labour give rise to a large variation in estimates of productivity growth (Wu 2011). Nonetheless, most measures indicate a pronounced acceleration in productivity in the mid 1980s, the early 1990s, and the late 1990s–mid 2000s, followed by more subdued growth thereafter. Roughly speaking, these ‘cycles’ in productivity growth have tended to coincide with or follow major periods of economic reform. In the past decade, productivity growth has slowed as the benefits of earlier reforms have faded.

## Recent Trends

Over the past few years, growth in China has continued to slow. Investment growth has weakened sharply, while consumption growth has moderated as growth in household income has slowed (Graph 13). Slower growth in domestic demand has weighed on imports. Growth in Chinese exports has also weakened as a result of the slowdown in advanced economies, a downturn in the global technology cycle and the escalation of the US–China technology and trade disputes in 2018–19. Slower growth in financing to the business sector over recent years has reinforced the structural forces that were already putting downward pressure on growth. China's total social financing (a measure of ‘broad credit’ that captures bank and non-bank financing to the real economy) has eased noticeably in the past two years, reflecting slowing growth in lending to businesses (Graph 14).
While this may partly reflect weaker demand by the private sector, it also reflects the government's regulatory crackdown on riskier forms of non-bank, off-balance sheet financing that began in 2017. This type of lending grew very strongly in the wake of the 2008–09 stimulus, but more recently it has been falling as a result of the government's measures, which were designed to reduce vulnerabilities in the financial system.

In response to the downward pressure on growth over the past year or so, the government has eased monetary and fiscal policy, although to date the stimulus has remained relatively targeted. Authorities have stressed that they will not resort to a ‘flood-like’ stimulus akin to the countercyclical policies enacted during the GFC (PBC 2019a), and have pledged not to attempt to boost growth by stimulating residential construction (Ministry of Finance of the PRC 2019). Instead, monetary policy easing by the People's Bank of China (PBC) has primarily taken the form of cuts to required reserve ratios (which mandate the share of deposits that banks must hold with the PBC) to increase the supply of funds available for lending. The PBC has also guided money market interest rates lower, and issued guidance to banks to increase lending to small businesses and reduce interest rates for these firms.

Complementing these measures, the government has eased fiscal policy through cuts to value-added, corporate income and household income taxes and by specifying higher local government bond issuance quotas to fund increased public infrastructure investment. Expansionary fiscal policy resulted in a sharp widening in the budget deficit through the second half of 2018 and in 2019, which probably helped to buoy investment and retail sales in the second half of 2019.
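As a stylised illustration of the mechanics of a required reserve ratio cut, the sketch below shows how lowering the ratio frees reserves for lending and raises the textbook upper bound on the deposit multiplier. The deposit figure and the size of the cut are hypothetical, chosen purely for illustration; they are not drawn from PBC data.

```python
def freed_reserves(deposits, rrr_old, rrr_new):
    """Reserves released by a cut in the required reserve ratio.

    Banks must hold deposits * rrr with the central bank, so cutting
    the ratio frees deposits * (rrr_old - rrr_new) for new lending.
    """
    return deposits * (rrr_old - rrr_new)

def deposit_multiplier(rrr):
    """Textbook upper bound on deposits supported per unit of reserves."""
    return 1.0 / rrr

# Hypothetical figures: 100 (trillion yuan) of deposits, RRR cut from 12% to 11%
released = freed_reserves(100.0, 0.12, 0.11)
print(f"Reserves freed for lending: {released:.2f} trillion")
print(f"Multiplier bound: {deposit_multiplier(0.12):.2f} -> {deposit_multiplier(0.11):.2f}")
```

In practice banks lend out less than this upper bound, which is one reason the cuts have been paired with direct guidance to lend to small businesses.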
## The Outlook for Growth

The long-term structural headwinds arising from a slowing working-age population, reduced incentives to invest and subdued productivity growth suggest that Chinese growth will slow further in coming years. As a thought experiment, presented in Graph 15, we consider a growth scenario that extrapolates trends (estimated over the past 10 years) in the production-side ingredients of GDP growth: labour, capital and total factor productivity.[6] The results indicate that, if recent trends were to continue, GDP growth could halve from current rates by 2030.

International evidence reinforces the expectation that Chinese growth will continue to slow. For many years, China has experienced faster growth than nearly all other major economies. However, as argued by Pritchett and Summers (2013), the other extraordinary growth experiences of the past, such as the rise of Japan after World War II, and the rise of east Asian economies starting in the 1960s, were typically followed by periods of sharply lower growth. They propose that the most robust empirical finding about growth globally is ‘regression to the mean’ – namely, the tendency for economies experiencing ‘above-normal’ growth to revert to the global average. Lee (2017) and Barro (2016) have also argued, on the basis of separate empirical analyses of international data, that Chinese growth is likely to slow further, as income per capita in China converges up towards the levels enjoyed in advanced economies.

While the decline in the working-age population, and hence the available labour supply, can be expected to place downward pressure on growth in the years ahead, the extent of decline could be affected by changes in household preferences and government policy.
For example, assuming a ‘high’ fertility scenario used in projections by the United Nations, in which the Chinese birth rate rises and stabilises above 2.1 births per woman (the rate considered necessary for replacement), the working-age population would fall at a slower rate and eventually increase in the second half of the current century (Lim and Cowling 2016; Graph 16).[7] However, for fertility to increase, Chinese households would have to reverse their growing preference for smaller families, which would be a dramatic shift given the transition from high to low fertility rates that has already occurred.

A more immediate increase in the working-age population could result from the government mandating increases in the retirement age. Assuming that the retirement age increases gradually from 60 to 65 between 2020 and 2035, the working-age population would initially increase, before resuming its downward trend. In other words, while increasing the retirement age would temporarily boost the available supply of labour, it would only delay, not prevent, the decline in the working-age population.

Growth in investment could also be stronger than recent trends would suggest if the government were to support investment through systematically more expansionary fiscal and monetary policy. However, the targeted approach to policy easing taken to date, and the government's desire to avoid harming financial stability through excessive stimulus, suggest that, aside from attempting to smooth cyclical fluctuations, authorities are likely to accommodate a slowing trend growth trajectory. The staged lowering of GDP growth targets in recent years, and the leadership's greater emphasis on the ‘quality’ of growth rather than its speed (Li 2018, 2019), reduce the probability that the government will attempt to engineer dramatically stronger growth in investment in coming years.
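The growth-accounting arithmetic behind scenarios of this kind (described in footnote [6]) is simple enough to sketch directly: trend output growth is $y^T = a^T + \alpha l^T + (1-\alpha)k^T$. The figures below are purely illustrative assumptions, not the article's estimates, and are chosen only to show how a slowdown in capital accumulation and TFP feeds through to trend growth.

```python
def trend_gdp_growth(tfp, labour_share, labour, capital):
    """Trend output growth from trend input growth (all rates as decimals):
    y = a + alpha * l + (1 - alpha) * k
    where a is TFP growth, alpha the labour share of income,
    l quality-adjusted labour growth and k capital stock growth."""
    return tfp + labour_share * labour + (1.0 - labour_share) * capital

# Illustrative (hypothetical) trends: TFP +1.5%/yr, labour -0.2%/yr,
# capital stock +7%/yr, labour income share 0.5
baseline = trend_gdp_growth(0.015, 0.5, -0.002, 0.07)
print(f"Baseline trend growth: {baseline:.1%}")  # 4.9%

# Weaker TFP and capital accumulation roughly halve the trend
slowdown = trend_gdp_growth(0.008, 0.5, -0.004, 0.04)
print(f"Slowdown scenario: {slowdown:.1%}")
```

Under assumptions like these, even modest declines in capital and TFP growth compound into a markedly lower trend, which is the mechanism behind the halving scenario in Graph 15.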
However, the change in emphasis from high-speed to high-quality growth does indicate a renewed focus on improving productivity growth over the longer term. The scenario presented in Graph 15 assumes continued low rates of productivity growth. It is difficult to forecast productivity because it depends on future technological progress and changes in government policy. There is also uncertainty about the starting point for productivity; some estimates suggest that Chinese productivity growth is weaker than official data indicate, and perhaps negative (Wu 2014; Feenstra, Inklaar and Timmer 2015). However, on any measure, there is still large scope for future productivity growth in China. For example, estimates that attempt to compare total factor productivity in individual countries to a ‘frontier’ economy (the United States) suggest that China remains significantly below the global productivity frontier, although data measurement issues mean that such comparisons are inevitably imprecise (Graph 17).[8]

In recent years, the Chinese Government has implemented several initiatives to encourage faster productivity growth. These include allocating government funds to support innovation start-ups and boost spending on research and development (R&D), with a view to spurring technological innovation. Despite these efforts, growth in R&D spending has slowed from the rapid rates in the 2000s, and a high-frequency indicator of activity in high-value-added emerging industries (the Mastercard–Caixin–BBD New Economy Index) suggests that growth in innovative sectors has eased since 2017 (Graph 18). External pressures may also influence the pace of innovation in China in coming years.
Recent measures taken by the United States to restrict Chinese foreign investment in US technology and telecommunications industries and prevent sales of American technology to Chinese companies could, if they persist, impede or slow technological progress in some Chinese industries.[9] However, such measures are also likely to intensify efforts already underway in China to achieve self-sufficiency in key technologies.

Measures to boost technological innovation are only one aspect of the Chinese Government's efforts to lift productivity growth. In addition, the government has implemented a series of ‘supply-side structural reform’ policies. These have succeeded in reducing excess capacity in parts of heavy industry, which has improved the profitability and efficiency of parts of the corporate sector. The government has also continued to undertake SOE reforms, which have focused on strengthening the role of SOEs in the economy rather than supporting the more profitable private sector (Naughton 2018; Lardy 2019). While boosting productivity is high on the government's list of priorities, it remains to be seen whether the current mix of policies will be able to reverse recent trends.

The prospect of growth continuing its slowing trajectory, largely for structural reasons, poses challenges for economic policy in China. The fact that nominal GDP growth was strong throughout the reform era allowed rising levels of debt to be matched by rising incomes. Combined with a cautious approach to the sequencing of financial reforms, and relatively low levels of foreign-currency denominated debt, this helped China avoid the chronic financial instability encountered by many other emerging economies in this transition phase. However, the investment-intensive (and largely debt-funded) pattern of growth since the GFC, combined with the structural slowing in growth, has seen the debt-to-GDP ratio rise sharply in the past decade, presenting risks to financial stability (Graph 19).
These risks relate not only to the high levels of debt, but also to broader financial vulnerabilities stemming from off-balance sheet lending and concerns about the quality of the debt issued. Declining nominal GDP growth means that growth in debt must also slow to prevent the debt ratio from rising further. Accordingly, current policy seeks to keep total social financing growth in line with nominal GDP growth (PBC 2019b).

Since the early 2010s, there has been a rise in episodes of financial instability, including a disruption to the interbank market in 2013 and a collapse in stock prices in 2015. While these issues were themselves partly driven by earlier policy changes, they were prevented from causing more systemic problems by rapid policy responses once the risks were recognised. Regulatory reforms since 2017 have also been effective at slowing the corporate sector's accumulation of debt, thereby lowering the risk of a large-scale systemic financial disruption or crisis. Even so, the level of debt remains high, and household and government debt continue to rise. In this context, the government must strike a delicate balance between stimulating the economy enough to support overall GDP growth, and stimulating it so much, via excessive growth in credit, that debt rises even further and financial vulnerabilities deepen.

## Conclusion

China's emergence as one of the largest and fastest-growing economies in the world, beginning in the late 1970s, followed decades of economic volatility and social and political turmoil. The comparatively benign growth trajectory charted through the period of economic reforms was underpinned by rapid industrialisation, steady rural-urban migration, a rising working-age population, an increased role for the private sector, strong growth in residential investment and productivity-enhancing reforms. However, the reversal or slowing of many of these impulses suggests that China's period of ‘above-normal’ growth is drawing to a close.
This will create challenges for policymakers as they attempt to foster continued increases in incomes while forestalling risks arising from high levels of debt. How the authorities navigate that trajectory will have significant implications for China's major trading partners, including Australia, in the years ahead.

## Footnotes

The authors are in the Economic Analysis Department. [*]

Perkins (1964) estimates that cooperatives had an average size of 200 families, while communes comprised 4,000–5,000 families. [1]

Targets were implemented for a much smaller number of commodities in the PRC than was the case in the Soviet Union (Naughton 2007, p 62). In practice, though, even the more detailed targets in the Soviet Union were rarely met and constantly revised (Gregory 2003). Thus, despite the differences between the Chinese and Soviet models of central planning, they encountered similar problems. [2]

Residential investment in Graph 10 is estimated using a slightly modified version of the method in Koen et al (2013). [3]

See Ma, Roberts and Kelly (2017) for further discussion of this issue. These estimates are based on official GDP and investment data. The capital stock is calculated using the perpetual inventory method, initialised at the 1952 level estimated by Wu (2014), and excludes residential investment. [4]

These estimates are Törnqvist indices based on official GDP, investment (gross fixed capital formation, excluding residential investment) and employment data, and time-varying weights (labour and capital income shares). The labour and capital shares are adjusted for taxes on production and are estimated using data from the official Flow of Funds (physical transaction) accounts, published by the National Bureau of Statistics of China. Labour input is adjusted for quality using data on average years of schooling derived from Barro and Lee (2001), Cohen and Soto (2007) and the UNDP (2018).
[5]

The calculation of trend GDP growth ($y^T$) is $y^T = a^T + \alpha l^T + (1-\alpha)k^T$, where $a^T$ is trend growth in total factor productivity, $\alpha$ is the labour share of income, $l^T$ is trend growth in labour input, adjusted for quality (average years of schooling), and $k^T$ is trend growth in the capital stock. Linear trends are estimated by regressing each variable on a time trend. The calculation of trend growth in labour input assumes that employment grows at the same rate as United Nations projections of the working-age population and that average years of schooling follow their 10-year linear trend. [6]

Graph 16 updates the scenarios from Lim and Cowling (2016) for recent data. [7]

Cross-country comparisons of productivity yield divergent results depending on the approach and underlying assumptions. In light of the uncertainty around such estimates, Graph 17 averages calculations from three different methods and shows maximum and minimum estimates in each case. The measures correspond to input- and output-oriented data envelopment analysis models (Charnes, Cooper and Rhodes 1978) and independent estimates of total factor productivity across countries, at current purchasing power parity rates, compiled by Feenstra, Inklaar and Timmer (2015). The first two measures are estimated using output, capital stock and employment data from the Penn World Table 9.1 for G20 countries, using output estimates that impose transitivity in multilateral comparisons. [8]

The policies are documented by US Department of the Treasury (2018) and Federal Register (2018). [9]

## References

Barro RJ (2016), ‘Economic Growth and Convergence, Applied to China’, China & World Economy, 24(5), pp 5–19.

Barro RJ and J-W Lee (2001), ‘International Data on Educational Attainment: Updates and Implications’, Oxford Economic Papers, 3, pp 541–563.
Brandt L and X Zhu (2000), ‘Redistribution in a Decentralized Economy: Growth and Inflation in China under Reform’, Journal of Political Economy, 108(2), pp 422–439.

Cai F (2018), ‘How has the Chinese Economy Capitalised on the Demographic Dividend During the Reform Period?’, in Garnaut R, L Song and F Cai (eds), China's 40 Years of Reform and Development, 1978–2018, ANU Press, Canberra, pp 235–256.

Charnes AW, W Cooper and E Rhodes (1978), ‘Measuring the Efficiency of Decision Making Units’, European Journal of Operational Research, 2, pp 429–444.

China Household Finance Survey (2017), Southwestern University of Finance and Economics. Available at: <http://www.chfsdata.org/>.

Chinn DL (1980), ‘Basic Commodity Distribution in the People's Republic of China’, The China Quarterly, 84, pp 744–754.

Cohen D and M Soto (2007), ‘Growth and Human Capital: Good Data, Good Results’, Journal of Economic Growth, 12, pp 51–76.

Federal Register (2018), ‘Addition of Certain Entities; and Modifications of Entry on the Entity List’, A Rule by the Industry and Security Bureau, 1 August. Available at: <https://www.federalregister.gov/documents/2018/08/01/2018-16474/addition-of-certain-entities-and-modification-of-entry-on-the-entity-list/>.

Feenstra RC, R Inklaar and MP Timmer (2015), ‘The Next Generation of the Penn World Table’, American Economic Review, 105(10), pp 3150–3182. Available for download at <www.ggdc.net/pwt>.

Gan J (2008), ‘Privatization in China: Experiences and Lessons’, in Barth J, J Tatom and G Yago (eds), China's Emerging Financial Markets: Challenges and Opportunities, Springer, Boston.

Garnaut R and G Ma (1996), ‘China's Grain Demand: Recent Experience and Prospects to the year 2000’, in Garnaut R, S Guo and G Ma (eds), The Third Revolution in the Chinese Countryside, Cambridge University Press, New York.

Gregory PR (2003), The Political Economy of Stalinism: Evidence from the Soviet Secret Archives, Cambridge University Press, New York.
Koen V, R Herd, X Wang and T Chalaux (2013), ‘Policies for Inclusive Urbanisation in China’, OECD Economics Department Working Papers No 1090.

Lardy N (1998), China's Unfinished Economic Revolution, Brookings Institution Press, Washington DC.

Lardy N (2019), The State Strikes Back: The End of Economic Reform in China?, Peterson Institute for International Economics, Washington DC.

Lee J-W (2017), ‘China's Economic Growth and Convergence’, The World Economy, 40, pp 2455–2474.

Li K (2018), ‘Report on the Work of the Government’, Delivered at the First Session of the 13th National People's Congress of the People's Republic of China, 5 March, Beijing.

Li K (2019), ‘Report on the Work of the Government’, Delivered at the Second Session of the 13th National People's Congress of the People's Republic of China, 5 March, Beijing.

Lim J and A Cowling (2016), ‘China's Demographic Outlook’, RBA Bulletin, June, pp 35–42.

Ma G, I Roberts and G Kelly (2017), ‘Rebalancing China's Economy: Domestic and International Implications’, China & World Economy, 25(1), pp 1–31.

Ministry of Finance of the People's Republic of China (2019), ‘Press Conference of the Ministry of Finance, PRC’ [in Chinese], 6 September. Available at: <http://www.mof.gov.cn/zhengwuxinxi/caizhengxinwen/201909/t20190906_3382239.htm>.

Naughton B (2007), The Chinese Economy: Transitions and Growth, The MIT Press, Cambridge, Massachusetts.

Naughton B (2018), ‘State Enterprise Reform Today’, in Garnaut R, L Song and F Cai (eds), China's 40 Years of Reform and Development, 1978–2018, ANU Press, Canberra, pp 375–394.

Oppers SE (1997), ‘Macroeconomic Cycles in China’, IMF Working Paper WP/97/135, International Monetary Fund, Washington DC.
People's Bank of China (PBC) (2019a), ‘Taking the New Development Concept as Guidance, and Promoting Steady, Healthy and Sustainable Development of the Chinese Economy’, Press Conference Transcript [in Chinese], 24 September. Available at: <http://www.pbc.gov.cn/goutongjiaoliu/113456/113469/3895219/index.html>.

People's Bank of China (PBC) (2019b), ‘Second Quarter Monetary Policy Implementation Report’ [in Chinese], 9 August. Available at: <http://www.pbc.gov.cn/goutongjiaoliu/113456/113469/3872965/index.html>.

Perkins DH (1964), ‘Centralization and Decentralization in Mainland China's Agriculture, 1949–1962’, Quarterly Journal of Economics, Vol LXXVIII, pp 208–237.

Pritchett L and LH Summers (2013), ‘Asia-phoria Meets Regression to the Mean’, Proceedings, Federal Reserve Bank of San Francisco, November, pp 1–35.

United Nations Development Programme (UNDP) (2018), Human Development Indices and Indicators: 2018 Statistical Update, United Nations, New York. Available at: <http://hdr.undp.org/sites/default/files/2018_human_development_statistical_update.pdf>.

US Department of the Treasury (2018), ‘Q&A: Interim Regulations for FIRRMA Pilot Program’, Office of Public Affairs, 10 October. Available at: <https://home.treasury.gov/system/files/206/QA-FIRRMA-Pilot-Program.pdf>.

Wu Y (2011), ‘Total Factor Productivity Growth in China: A Review’, Journal of Chinese Economic and Business Studies, 9(2), pp 111–126.

Wu HX (2014), ‘China's Growth and Productivity Performance Debate Revisited – Accounting for China's Sources of Growth with a New Dataset’, The Conference Board Economic Program Working Paper #14–01, New York, January.

Wu J (2019), ‘Soul Searching on China's 70-Year Economic Evolution’, Caixin Global, 14 October.

Zhu X (2012), ‘Understanding China's Growth: Past, Present and Future’, Journal of Economic Perspectives, 26(4), pp 103–124.
# Seminars & Events for PACM/Applied Mathematics Colloquium September 26, 2011 4:30pm - 6:30pm ##### Understanding 3D Shapes Jointly ###### PACM/Applied Mathematics Colloquium The use of 3D models in our economy and life is becoming more prevalent, in applications ranging from design and custom manufacturing, to prosthetics and rehabilitation, to games and entertainment. Although the large-scale creation of 3D content remains a challenging problem, there has been much recent progress in design software tools, like Google SketchUp for buildings or Spore for creatures, or in low cost 3D acquisition hardware, like the Microsoft Kinect scanner. As a result, large commercial 3D shape libraries, such as the Google 3D Warehouse, already contain millions of models. These libraries, however, can be unwieldy, when the need arises to efficiently incorporate models into various workflows. Speaker: Leonidas Guibas, Stanford University Location: Fine Hall 214 October 3, 2011 4:30pm - 6:30pm ##### Complexity theory applied to voting theory ###### PACM/Applied Mathematics Colloquium As it will be shown with results and examples, the paradoxes associated with standard voting rules are surprisingly likely and are so complex that one must worry about the legitimacy of election outcomes. To extract an understanding of what can happen and why, it is shown how lessons from complexity theory, where complicated behavior is due to a combination of simple interactions, explain many mysteries both in this area and for related topics such as nonparametric statistics, etc. Indeed, all paradoxes of standard rules, including Arrow's seminal "Impossibility Theorem," reflect simple but hidden symmetry structures connecting the preferences of voters. 
Speaker: Don Saari, University of California, Irvine Location: Fine Hall 214 October 10, 2011 4:00pm - 6:00pm ##### A new model for self-organized dynamics: From particle to hydrodynamic descriptions ###### PACM/Applied Mathematics Colloquium Self-organized dynamics is driven by "rules of engagement" which describe how each agent interacts with its neighbors. They consist of long-term attraction, mid-range alignment and short-range repulsion. Many self-propelled models are driven by the balance between these three forces, which yield emerging structures of interest. Examples range from consensus of voters and traffic flows to the formation of flocks of birds or school of fish, tumor growth etc. We introduce a new particle-based model, driven by self-alignment, which addresses several drawbacks of existing models for self-organized dynamics. The model is independent of the number of agents: only their geometry in phase space is involved. Speaker: Eitan Tadmor, University of Maryland Location: Fine Hall 214 October 17, 2011 4:30pm - 6:30pm ##### Optimization of Polynomial Roots, Eigenvalues and Pseudospectra ###### PACM/Applied Mathematics Colloquium The root radius and root abscissa of a monic polynomial are respectively the maximum modulus and the maximum real part of its roots; both these functions are nonconvex and are non-Lipschitz near polynomials with multiple roots. We begin the talk by giving constructive methods for efficiently minimizing these nonconvex functions in the case that there is just one affine constraint on the polynomial's coefficients. We then turn to the spectral radius and spectral abscissa functions of a matrix, which are analogously defined in terms of eigenvalues. We explain how to use nonsmooth optimization methods to find local minimizers and how to use nonsmooth analysis to study local optimality conditions for these nonconvex, non-Lipschitz functions. Speaker: Michael L. 
Overton, Courant Institute for Mathematics and New York University Location: Fine Hall 214 October 24, 2011 2:30pm - 3:30pm ##### On queues and numbers ###### PACM/Applied Mathematics Colloquium We will show that certain symmetries which have traditionally played an important role in number theory are also important for analyzing certain simple queueing systems. This connection between number theory and queueing theory leads to some interesting questions in number theory and also helps understand results from several queueing theory papers. We will demonstrate the connection by examining the problem of managing a mini-market with an express line queue. If time permits we will explain how the management problem in a supermarket setting relates to space-time geometry. The talk will be self contained, in particular, no experience in managing mini-markets will be assumed. Speaker: Eitan Bachmat, BGU Location: Fine Hall 314 October 24, 2011 4:30pm - 6:30pm ##### Existence and regularity for a class of degenerate diffusions arising in population genetics ###### PACM/Applied Mathematics Colloquium Joint PACM Colloquium and Analysis Seminar Speaker: Charles Epstein, University of Pennsylvania Location: Fine Hall 214 November 7, 2011 4:30pm - 6:30pm ##### The mathematics of desertification: searching for early warning signals ###### PACM/Applied Mathematics Colloquium The process of desertification can be modeled by systems of reaction-diffusion equations. Numerical simulations of these models agree remarkably well with field observations: both show that 'vegetation patterns'—i.e. regions in which the vegetation only survives in localized 'patches'—naturally appear as the transition between a healthy homogeneously vegetated state and the (non-vegetated) desert state. 
Desertification is a catastrophic and non-reversible event during which huge patterned vegetation areas 'collapse' into the desert state at a fast time scale—for instance as a consequence of a slow decrease of yearly rainfall, or through an increased grazing pressure. Speaker: Arjen Doelman, University of Leiden / Lorentz Center Location: Fine Hall 214 November 21, 2011 4:30pm - 6:30pm ##### Prolates on the sphere, Extensions and Applications: Slepian functions for geophysical and cosmological signal estimation and spectral analysis ###### PACM/Applied Mathematics Colloquium Functions that are timelimited (or spacelimited) cannot be simultaneously bandlimited (in frequency). Yet the finite precision of measurement and computation unavoidably bandlimits our observation and modeling scientific data, and we often only have access to, or are only interested in, a study area that is temporally or spatially bounded. In the geosciences we may be interested in spectrally modeling a time series defined only on a certain interval, or we may want to characterize a specific geographical area observed using an effectively bandlimited measurement device. In cosmology we may wish to compute the power spectral density of the cosmic microwave background radiation without the contaminating effect of the galactic plane. Speaker: Frederik Simons, Princeton University Location: Fine Hall 214 November 28, 2011 4:30pm - 5:30pm ##### Sharp Thresholds in Statistical Estimation ###### PACM/Applied Mathematics Colloquium Sharp thresholds are ubiquitous high-dimensional combinatorial structures. The oldest example is probably the sudden emergence of the giant component in random graphs, first discovered by Erdos an Renyi. More recently, threshold phenomena have started to play an important role in some statistical learning and statistical signal processing problems, in part because of the interest in 'compressed sensing'. 
The basic setting is one in which a large number of noisy observations of a high-dimensional object are made. As the ratio of the number of observations to the number of `hidden dimensions' crosses a threshold, our ability to reconstruct the object increases dramatically. I will discuss several examples of this phenomenon, and some algorithmic and mathematical ideas that allow to characterize these threshold phenomena. Speaker: Andrea Montanari, Stanford University Location: Fine Hall 214 December 5, 2011 4:30pm - 5:30pm ##### Nonlocal Evolution Equations ###### PACM/Applied Mathematics Colloquium Nonlocal evolution equations have been around for a long time, but in recent years there have been some nice new developments. The presence of nonlocal terms might originate from modeling physical, biological or social phenomena (incompressibility, Ekman pumping, chemotaxis, micro-micro interactions in complex fluids, collective behavior in social aggregation) or simply from inverting local operators in the analysis of systems of PDE. I will brifly present some regularity results for hydrodynamic models with singular constitutive laws. The main part of the talk will present a nonlinear maximum principle for linear nonlocal dissipative operators and applications. Speaker: Peter Constantin, Princeton University Location: Fine Hall 214 February 6, 2012 4:30pm - 5:30pm ##### Graph Gauge Theory and Vector Diffusion Maps ###### PACM/Applied Mathematics Colloquium We consider a generalization of graph Laplacian which acts on the space of functions which assign to each vertex a point in $d$-dimensional space. The eigenvalues of such connection Laplacian are useful for examining vibrational spectra of molecules as well as vector diffusion maps for analyzing high dimensional data. We will discuss algebraic, probabilistic and algorithmic methods in the study of the connection spectra. 
For example, if the graph is highly symmetric and the connection Laplacian is invariant under the symmetry of the graph, then its eigenvalues can be deduced by using irreducible representations. In addition, by using matrix concentration inequalities, the eigenvalues of random connection Laplacians can be approximated by the eigenvalues of the expected matrices under appropriate conditions. Speaker: Fan Chung, University of California Location: Fine Hall 214 February 13, 2012 4:30pm - 5:30pm ##### Computability and Complexity of Julia Sets ###### PACM/Applied Mathematics Colloquium Studying dynamical systems is key to understanding a wide range of phenomena ranging from planetary movement to climate patterns to market dynamics. Various computational and numerical tools have been developed to address specific questions about dynamical systems, such as predicting the weather or planning the trajectory of a satellite. However, the theory of computation behind these problems appears to be very difficult to develop. In fact, little is known about computability of even the most natural problems arising from dynamical systems. In this talk I will survey the recent study of the computational properties of dynamical systems that arise from iterating quadratic polynomials on the complex plane. These give rise to the amazing variety of fractals known as Julia sets, and are closely connected to the Mandelbrot set. Speaker: Mark Braverman, Princeton University, Computer Science Location: Fine Hall 214 February 20, 2012 4:30pm - 5:30pm ##### A Random Walk on Image Patches ###### PACM/Applied Mathematics Colloquium Algorithms that analyze patches extracted from time series or images have led to state-of-the art techniques for classification, denoising, and the study of nonlinear dynamics. 
In the first part of the talk we describe two examples of such algorithms: a novel method to estimate the arrival times of seismic waves from a seismogram, and a new patch-based method to denoise images. Both approaches combine the following two ingredients: the signals (time series or images) are first lifted into a high-dimensional space using time/space-delay embedding; the resulting phase space is then parametrized using a nonlinear method based on the eigenvectors of the graph Laplacian. Both algorithms outperform existing gold standards.

Speaker: Francois Meyer, University of Colorado
Location: Fine Hall 214

February 27, 2012 4:30pm - 5:30pm

##### Dimension Reduction, Coarse-Graining and Data Assimilation in High-Dimensional Dynamical Systems

###### PACM/Applied Mathematics Colloquium

Modern computing technologies, such as massively parallel simulation, special-purpose high-performance computers, and high-performance GPUs, make it possible to simulate complex high-dimensional dynamical systems and to generate time series in amounts too large to be grasped by traditional “look and see” analyses. This calls for robust and automated methods to extract the essential structural and dynamical properties from these data in a manner that depends little, if at all, on human subjectivity. To this end, a decade of work has led to the development of analysis techniques which rely on the partitioning of the conformation space into discrete substates and reduce the dynamics to transitions between these states.

Speaker: Eric Vanden-Eijnden, Courant NYU
Location: Fine Hall 214

March 5, 2012 4:30pm - 5:30pm

##### Topological Landscape of Networks

###### PACM/Applied Mathematics Colloquium

We will discuss how one can endow a network with a landscape in a very simple and natural way. Critical point analysis is introduced for functions defined on networks.
The concepts of local minima/maxima and saddle points of different indices are defined by extending the notions of gradient flows and minimum energy paths to the network setting. Persistent homology is used to design efficient numerical algorithms for performing such analysis. Applications to some examples of social and biological networks (LAO protein binding network) are demonstrated. These examples show that the critical nodes play important roles in the structure and dynamics of such networks. This is joint work with Weinan E and Jianfeng Lu.

Speaker: Yuan Yao, Peking University
Location: Fine Hall 214

March 26, 2012 4:30pm - 5:30pm

##### Geometry and Topology in Dimension Reduction

###### PACM/Applied Mathematics Colloquium

In the first part of the talk we describe how learning the gradient of a regression function can be used for supervised dimension reduction (SDR). We provide an algorithm for learning gradients in high-dimensional data, provide theoretical guarantees for the algorithm, and provide a statistical interpretation. Comparisons to other methods on real and simulated data are presented. In the second part of the talk we present preliminary results on using the Laplacian on forms for dimension reduction. This involves understanding higher-order versions of the isoperimetric inequality for both manifolds and abstract simplicial complexes.

Speaker: Sayan Mukherjee, Duke University
Location: Fine Hall 214

April 2, 2012 4:30pm - 5:30pm

##### Mathematics of the Human Brain Connectome

###### PACM/Applied Mathematics Colloquium

The human brain connectome is an ambitious project to provide a complete map of neural connectivity and a recent source of excitement in the neuroscience community. Just as the human genome is a triumph of marrying technology (high-throughput sequencers) with theory (dynamic programming for sequence alignment), the human connectome is the result of a similar union.
The technology in question is that of diffusion magnetic resonance imaging (dMRI), while the requisite theory, we shall argue, comes from three areas: PDE, harmonic analysis, and algebraic geometry. The underlying mathematical model in dMRI is the Bloch-Torrey PDE, but we will approach the 3-dimensional imaging problem directly.

Speaker: Lek-Heng Lim, University of Chicago
Location: Fine Hall 214

April 9, 2012 4:30pm - 5:30pm

##### Optimal Phase Transitions in Compressed Sensing

###### PACM/Applied Mathematics Colloquium

"Compressed Sensing" is an active research area which touches on harmonic analysis, geometric functional analysis, applied mathematics, computer science, electrical engineering and information theory. Concrete achievements, such as speeding up pediatric MRI acquisition times from several minutes to under a minute, are now entering daily use. In my talk I will discuss the notion of phase transitions in combinatorial geometry and describe how they precisely demarcate the situations where a popular algorithm in compressed sensing -- ell-1 minimization -- can succeed. Then I will discuss the issue: what is the best possible phase transition of any algorithm? We get different answers depending on the assumptions we make.

Speaker: Prof. David Donoho, Stanford University
Location: Fine Hall 214

April 16, 2012 4:30pm - 5:30pm

##### Special PACM Student Colloquium!

###### PACM/Applied Mathematics Colloquium

Dustin Mixon - "Phaseless recovery with polarization": In many applications, an unknown vector is measured according to the magnitude of its inner product with some known vector. It is desirable to design an ensemble of vectors for which any unknown vector can be recovered from such measurements (up to a global phase factor). In 2006, Balan et al. demonstrated that this measurement process is injective for generic M-dimensional vector ensembles of size at least 4M-2. Recently, Candes et al.
used semidefinite programming to stably reconstruct from measurements with random ensembles of size O(M log M). In this talk, we use the polarization identity and expander graphs to efficiently recover from measurements with specific deterministic ensembles of size O(M).

Speaker: Dustin Mixon, Afonso Bandeira, Princeton University
Location: Fine Hall 214

April 23, 2012 4:30pm - 5:30pm

##### Super-resolution via sparse recovery: progress and challenges

###### PACM/Applied Mathematics Colloquium

From the knowledge of a function in a frequency band, super-resolution consists of detecting or estimating sharp features which are less than the inverse of a bandwidth apart from one another. Sparse recovery is one way to extend this Shannon-Nyquist scaling, but "by how much" and "in which settings" is not yet clearly understood. This work attempts to start the classification of singularity layout vs. noise level for proper identification by ell-1 minimization. When a condition of constructive interference is met, ell-1 minimization performs optimally: it only breaks down in the unrecoverable regime where no other method would work either. As a corollary, we obtain a novel noise-dependent scaling which replaces the inverse bandwidth rule for super-resolution.

Speaker: Laurent Demanet, MIT
Location: Fine Hall 214
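Both of the abstracts above center on ell-1 minimization for sparse recovery. As a concrete illustration, here is a minimal basis-pursuit sketch (minimize ||x||_1 subject to Ax = b), posed as a linear program via the standard split x = xp - xn; the problem sizes, the random seed, and the use of SciPy's linprog are illustrative choices of mine, not details from the talks:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 40, 3                       # measurements, ambient dimension, sparsity

A = rng.standard_normal((m, n))           # random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                            # noiseless measurements

# Basis pursuit as an LP: write x = xp - xn with xp, xn >= 0 and
# minimize sum(xp) + sum(xn) subject to A @ (xp - xn) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x_true))     # small when ell-1 recovery succeeds
```

With enough measurements relative to the sparsity, the minimizer typically coincides with x_true; shrinking m pushes the problem across the phase transition the talks describe, and recovery fails.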
https://byjus.com/function-formulas
# Function Formulas

A function defines the relation between an input and an output. Function formulas are used to calculate the x-intercept, y-intercept and slope of a function; for a quadratic function you can also calculate its vertex. The function can also be plotted on a graph for different values of x.

The x-intercept of a function is found by setting f(x) equal to zero, and the y-intercept by setting x equal to zero. The slope of a linear function is read off after rearranging the equation into its general form, f(x) = mx + c, where m is the slope. The vertex of a quadratic function is found by rearranging the equation into its general form, f(x) = a(x – h)² + k, where (h, k) is the vertex.

## Function Problems

Some solved problems on functions are given below:

### Solved Examples

Question 1: Calculate the slope, x-intercept and y-intercept of the linear function f(x) = 5x + 4.

Solution: Given f(x) = 5x + 4. The general form of a linear function is f(x) = mx + c, so the slope is m = 5.

Substituting f(x) = 0: 0 = 5x + 4, so 5x = -4 and x = $\frac{-4}{5}$. The x-intercept is ($\frac{-4}{5}$, 0).

Substituting x = 0: f(0) = 5(0) + 4 = 4. The y-intercept is (0, 4).

Question 2: Calculate the vertex, x-intercepts and y-intercept of the quadratic function f(x) = x² – 6x + 4.

Solution: Given f(x) = x² – 6x + 4 = (x² – 6x + 9) – 5 = (x – 3)² – 5. The general form of a quadratic function is f(x) = a(x – h)² + k, so the vertex is (h, k) = (3, -5).

Substituting f(x) = 0: x² – 6x + 4 = 0, and by the quadratic formula

x = $\frac{6 \pm \sqrt{(-6)^{2}-4(1)(4)}}{2(1)}$ = $\frac{6 \pm \sqrt{36-16}}{2}$ = $\frac{6 \pm \sqrt{20}}{2}$ = $\frac{6 \pm 2\sqrt{5}}{2}$ = 3 ± $\sqrt{5}$

The given quadratic function has two x-intercepts.
The x-intercepts are (3 – $\sqrt{5}$, 0) and (3 + $\sqrt{5}$, 0).

Substituting x = 0: f(0) = (0)² – 6(0) + 4 = 4. The y-intercept is (0, 4).

More topics in Function Formula: Average Rate of Change Formula, Simpson’s Rule Formula, Linear Approximation Formula, Quadratic Function Formula, Linear Function Formula, Inverse Function Formula, Maclaurin Series Formula.
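The slope, intercept and vertex computations in the two worked examples can be checked numerically. A minimal Python sketch (the function names and return conventions here are my own, not from the page):

```python
import math

def linear_features(m, c):
    """Slope, x-intercept and y-intercept of f(x) = m*x + c."""
    return m, (-c / m, 0), (0, c)

def quadratic_features(a, b, c):
    """Vertex, real roots (x-intercepts) and y-intercept of f(x) = a*x**2 + b*x + c."""
    h = -b / (2 * a)                 # vertex x-coordinate
    k = a * h**2 + b * h + c         # vertex y-coordinate
    disc = b**2 - 4 * a * c          # discriminant: real roots iff disc >= 0
    roots = []
    if disc >= 0:
        roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                        (-b + math.sqrt(disc)) / (2 * a)])
    return (h, k), roots, (0, c)

# Question 1: f(x) = 5x + 4 -> slope 5, x-intercept (-4/5, 0), y-intercept (0, 4)
print(linear_features(5, 4))
# Question 2: f(x) = x^2 - 6x + 4 -> vertex (3, -5), roots 3 +/- sqrt(5), y-intercept (0, 4)
print(quadratic_features(1, -6, 4))
```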
https://ask.sagemath.org/questions/7880/revisions/
# Revision history

### How to rearrange terms in an expression

I'm totally new to Sage. I was hoping I could use it for simple and complex application problems I run into at work. For example, the relationship between flow rate, Q, and pressure drop, dP, for flow of a power-law fluid (m, n) in a cylindrical tube (L, R) is:

Q = (pi*R^3/((1/n)+3))*(Tau/m)^(1/n)

So I want to rearrange this expression for dP in terms of Q, m, n, L, R so I can then compute a table of results, plot dP versus Q, etc. Now of course I can do the rearrangement by hand with pen and paper (or in my head on better days), but I was hoping that using the Sage notebook I could work through the rearrangement and thus have a digital record of the transformation so that others can follow along. However, in all the hours of reading and watching tutorials, I have yet to see this simple process in action. So far in my Sage notebook I have:

Q, L, R, m, n, dP = var("Q, L, R, m, n, dP")
Tau = dP/(2*L/R)
Q = (pi*R**3/((1/n)+3))*(Tau/m)**(1/n)

Is there a method, say, Q.rearrange(dP), that yields dP = f(Q, L, R, m, n)? If not, how do I write the intermediate steps in Sage-speak to give this expression? Sorry if this is such a basic question.
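For the record, the rearrangement the question asks for can be scripted step by step. The sketch below uses SymPy rather than a Sage notebook (Sage's symbolic solve() works along the same lines); the positive-symbol assumption is mine, added so the fractional power inverts cleanly:

```python
import sympy as sp

# Declare all quantities positive so (...)**(1/n) can be inverted without branch issues.
Q, L, R, m, n, dP = sp.symbols("Q L R m n dP", positive=True)

Tau = dP / (2 * L / R)                                      # wall shear stress, as in the question
flow_rhs = sp.pi * R**3 / (1/n + 3) * (Tau / m) ** (1/n)    # Q = flow_rhs

# Pen-and-paper steps, recorded digitally:
step1 = Q * (1/n + 3) / (sp.pi * R**3)        # isolate (Tau/m)**(1/n)
step2 = step1 ** n                            # raise both sides to the n-th power -> Tau/m
dP_expr = sp.simplify(step2 * m * 2 * L / R)  # undo Tau = dP*R/(2*L)

print(dP_expr)                                # dP as a function of Q, L, R, m, n
```

Substituting dP_expr back into the flow-rate formula and evaluating numerically recovers the original Q, which confirms the rearrangement is consistent.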
https://www.seliga.pl/en/stomatology/microscopes/smartoptic
A high-end microscope at a mid-range price. The optical performance and usability of these microscopes allow the SmartOPTIC to compete successfully even with excellent German-brand microscopes with a long tradition. It is a device often and willingly chosen by dentists in Poland. The sum of the SmartOPTIC's functional characteristics gives a product that many call "optimal" given its quality-to-price ratio.

The SmartOPTIC's modular design allows any configuration suited to the operator's expectations. By default, the microscope is available in two basic configurations:

• SmartOPTIC BASIC with a fixed binocular (45°)
• SmartOPTIC ERGO with an inclinable binocular (range 0-195°)

Below is a summary of the most important features of the microscope and its technical details.

## Binoculars

Equipped with wide-angle oculars with independently adjustable diopters for each eye. Flexible eyecups allow the user to work while wearing glasses. Depending on the version of the microscope, the binocular is either fixed or inclinable.

SmartOPTIC BASIC: fixed binocular, 45°
SmartOPTIC ERGO: inclinable binocular, 0-195°

## High-quality optics, five degrees of magnification

The optics provide a full range of magnification, from the smallest (x1.3) to the largest (x27), depending on the configuration. See the table of magnifications.

## LED light source as standard

The light source is a modern 80W LED, which gives a very bright, white light. Its service life is rated at 50,000 hours.

## 3-year guarantee

Reliability confirmed by a 3-year guarantee.

## A wide selection of accessories

The SmartOPTIC microscope can be fitted with a range of accessories, such as an inclinable binocular (0-195°), an arm extension, a binocular extender, etc. See more: Accessories

## Accurate focus setting

A built-in fine-focus knob on the lens allows precise focusing, which is essential when using high magnifications.
## Many mounting options

Available versions: wall, ceiling, floor, table, and mobile on wheels.

## Various types of vision tracks

A video track can be installed for DSLR cameras, SLR cameras, dedicated cameras, and popular Handycam camcorders. See more: Imaging

## Available filters

Depending on needs, the microscope may be equipped with filters: orange, green, or polarizing.

## Compact structure, aesthetic and modern design

The design gives complete freedom of movement, even in small offices.

The basic configuration of the SmartOPTIC microscope consists of the following elements:

• Binocular: fixed angle of 45º; wide-angle; oculars with independent diopter adjustment
• Objective lens: 250mm
• Light source: LED; efficient fiber-optic cable
• Filter: orange or green
• Arm length: extended (90cm)

Detailed information on the individual components can be found under Accessories.

The modular design allows the microscope to be equipped with various components depending on your needs:

• Binoculars: fixed angle of 45º; fixed angle of 90º; with adjustable tilt angle of 0-195º
• Objective lenses: 200mm; 250mm; 300mm; 400mm
• Lighting system: halogen 3000K 37klux; LED 6000K 50klux; xenon 6000K 105klux; efficient optical fiber
• Filters: green; orange; laser filter; polarizing filter
• Mounting options: ceiling; wall; floor; table; mobile on wheels
• Arm of the microscope: standard 60cm; extended 90cm; additional extension of 28cm
• Imaging: dedicated camera with beamsplitter; single beamsplitter; double beamsplitter; adapter for DSLR camera; adapter for camcorder-type camera; adapter for Sony NEX camera; adapter for CCD camera; LCD mast
• Equipment: binocular extender; rotation ring; LCD mast; shelf for printer; autoclavable handle covers

Detailed information on the individual components can be found under Accessories.

Magnification table for the SmartOPTIC microscope.
The table gives the magnification of the microscope for the various configurations and installed components: objective lens (200mm, 250mm, 300mm, 400mm), oculars (x10, x12.5) at the five zoom-knob positions, and binocular head (inclined: f135; inclinable: f170).

| Ocular | Knob | 200mm, f135 | 200mm, f170 | 250mm, f135 | 250mm, f170 | 300mm, f135 | 300mm, f170 | 400mm, f135 | 400mm, f170 |
|---|---|---|---|---|---|---|---|---|---|
| 10x/16mm | 0.4 | 2.70 | 3.40 | 2.16 | 2.72 | 1.80 | 2.27 | 1.35 | 1.70 |
| 10x/16mm | 0.6 | 4.05 | 5.10 | 3.24 | 4.08 | 2.70 | 3.40 | 2.02 | 2.55 |
| 10x/16mm | 1.0 | 6.75 | 8.50 | 5.40 | 6.80 | 4.50 | 5.66 | 3.37 | 4.25 |
| 10x/16mm | 1.6 | 10.80 | 13.60 | 8.64 | 10.88 | 7.20 | 9.06 | 5.40 | 6.80 |
| 10x/16mm | 2.5 | 16.88 | 21.25 | 13.50 | 17.00 | 11.25 | 14.17 | 8.42 | 10.63 |
| 12.5x/16mm | 0.4 | 3.37 | 4.25 | 2.70 | 3.40 | 2.25 | 2.83 | 1.69 | 2.13 |
| 12.5x/16mm | 0.6 | 5.06 | 6.38 | 4.05 | 5.10 | 3.38 | 4.25 | 2.53 | 3.20 |
| 12.5x/16mm | 1.0 | 8.44 | 10.63 | 6.75 | 8.50 | 5.63 | 7.08 | 4.20 | 5.30 |
| 12.5x/16mm | 1.6 | 13.50 | 17.00 | 10.80 | 13.60 | 9.00 | 11.32 | 6.74 | 8.50 |
| 12.5x/16mm | 2.5 | 21.08 | 26.58 | 16.88 | 21.25 | 14.08 | 17.71 | 10.54 | 13.33 |
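The table entries follow a simple pattern: each value equals ocular power × zoom step × (binocular focal length / objective focal length), with f = 135mm for the inclined head and f = 170mm for the inclinable one. This formula is inferred from the table values (it reproduces them to rounding), not stated on the page. A sketch:

```python
# Magnification model inferred from the table (matches the entries to rounding):
#   M = ocular_power * zoom_step * f_binocular / f_objective
def magnification(ocular, zoom, f_binocular, f_objective):
    return ocular * zoom * f_binocular / f_objective

ZOOM_STEPS = (0.4, 0.6, 1.0, 1.6, 2.5)   # the five magnification steps

# Reproduce one row of the table: 10x ocular, inclined head (f135), 200mm objective
row = [round(magnification(10, z, 135, 200), 2) for z in ZOOM_STEPS]
print(row)   # close to the table row 2.70, 4.05, 6.75, 10.80, 16.88
```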
https://projecteuclid.org/euclid.ijde/1484881347
## International Journal of Differential Equations

### Existence of Positive Solutions for Higher Order $(p,q)$-Laplacian Two-Point Boundary Value Problems

#### Abstract

We derive sufficient conditions for the existence of positive solutions to the higher order $(p,q)$-Laplacian two-point boundary value problem

$(-1)^{m_1+n_1-1}\big[\varphi_p\big(u^{(2m_1)}(t)\big)\big]^{(n_1)} = f_1(t,u(t),v(t)), \quad t\in[0,1],$

$(-1)^{m_2+n_2-1}\big[\varphi_q\big(v^{(m_2)}(t)\big)\big]^{(2n_2)} = f_2(t,u(t),v(t)), \quad t\in[0,1],$

subject to the boundary conditions

$u^{(2i)}(0)=0=u^{(2i)}(1)$ for $i=0,1,2,\dots,m_1-1$;

$\big[\varphi_p\big(u^{(2m_1)}(t)\big)\big]^{(j)}\big|_{t=0}=0$ for $j=0,1,\dots,n_1-2$, and $\varphi_p\big(u^{(2m_1)}(1)\big)=0$;

$\big[\varphi_q\big(v^{(m_2)}(t)\big)\big]^{(2i)}\big|_{t=0}=0=\big[\varphi_q\big(v^{(m_2)}(t)\big)\big]^{(2i)}\big|_{t=1}$ for $i=0,1,\dots,n_2-1$;

$v^{(j)}(0)=0$ for $j=0,1,2,\dots,m_2-2$, and $v(1)=0$,

where $f_1,f_2$ are continuous functions from $[0,1]\times\mathbb{R}^2$ to $[0,\infty)$, $m_1,n_1,m_2,n_2\in\mathbb{N}$, and $1/p+1/q=1$.
We establish the existence of at least three positive solutions for the two-point coupled system by utilizing the five-functional fixed point theorem, and we demonstrate our result with an example.

#### Article information

Source: Int. J. Differ. Equ., Volume 2013 (2013), Article ID 743943, 9 pages.
Dates: Revised 17 July 2013; accepted 17 July 2013. First available in Project Euclid: 20 January 2017.
Permanent link: https://projecteuclid.org/euclid.ijde/1484881347
Digital Object Identifier: doi:10.1155/2013/743943
Mathematical Reviews number (MathSciNet): MR3102796
Zentralblatt MATH identifier: 1300.34056

#### Citation

Kapula, Rajendra Prasad; Murali, Penugurthi; Rajendrakumar, Kona. Existence of Positive Solutions for Higher Order $(p,q)$-Laplacian Two-Point Boundary Value Problems. Int. J. Differ. Equ. 2013 (2013), Article ID 743943, 9 pages. doi:10.1155/2013/743943.
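For readers decoding the notation above: $\varphi_p$ is the standard one-dimensional $p$-Laplacian map. Its definition, which is standard and not restated in the abstract itself, is

$$\varphi_p(s) = |s|^{p-2}s, \qquad \varphi_p^{-1} = \varphi_q \quad \text{with } \frac{1}{p} + \frac{1}{q} = 1.$$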
https://www.csdn.net/tags/OtDaEgwsMzEzNjktYmxvZwO0O0OO0O0O.html
• ## yalmip (2020-02-13 12:38:14)

1. I have used this before, but not often, and I keep forgetting it, so I am writing up a summary.
2. First, bookmarking other people's material for reference:
   - Jianshu: https://www.jianshu.com/p/e1c45b3d8d8a (learning to use YALMIP); https://www.jianshu.com/p/0f9cb5a29e47 (YALMIP + MOSEK)
   - Cnblogs: https://www.cnblogs.com/kane1990/p/3428129.html (yalmip + lpsolve + matlab for solving mixed-integer linear programming problems, MIP/MILP)
   - CSDN: https://blog.csdn.net/qq_16309049/article/details/91549610 (YALMIP study notes: basics); https://blog.csdn.net/s83625981/article/details/80076478 (YALMIP user guide)
   - Almost forgot: the official site, https://yalmip.github.io/
3. The rest is my own summary, still in progress.

• ## YalMip (2017-02-02 21:10:48)

https://yalmip.github.io/

YALMIP is free of charge to use and is openly distributed, but note that copyright is owned by Johan Löfberg.

- YALMIP must be referenced (general reference, robust optimization reference, sum-of-squares reference) when used in a published work (give me some credit for saving your valuable time!)
- YALMIP, or forks or versions of YALMIP, may not be re-distributed as a part of a commercial product unless agreed upon with the copyright owner (if you make money from YALMIP, let me in first!)
- YALMIP is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE (if your satellite crashes or you fail your PhD due to a bug in YALMIP, your loss!)
- Forks or versions of YALMIP must include, and follow, this license in any distribution.
For installation, see the installation tutorial, and get started coding with the tutorials below. And don't forget: most likely you have to install additional solvers. Download: https://github.com/yalmip/YALMIP/archive/master.zip

- Installation (updated September 17, 2016): If it's hard, you're doing it wrong.
- Getting started (updated September 17, 2016): Tutorial introduces essentially everything you'll ever need. The remaining 95% is syntactic sugar.
- Linear programming (updated September 17, 2016): As easy as it gets.
- Linear separation with linear norms (updated September 17, 2016): Almost as easy as linear programming. Be careful though, symbolics might start to cause overhead.
- Second order cone programming (updated September 17, 2016): Ice-cream cone! Yummy.
- Semidefinite programming (updated September 17, 2016): Who wudda thought? Optimization over positive definite symmetric matrices is easy.
- Determinant maximization (updated September 17, 2016): Optimization with ellipsoids and likelihood functions are typical applications of determinant maximization.
- Duality (updated September 17, 2016): Extract dual solutions from conic optimization problems.
- Sum-of-squares programming (updated September 17, 2016): Almost nothing is a sum-of-squares, but let's hope yours is.
- Robust optimization (updated September 17, 2016): The only thing we can be sure of is the lack of certainty.
• Rank constrained semidefinite programming problems (updated September 17, 2016): Learn how to constrain ranks in semidefinite programs
• Nonlinear operators - integer models (updated September 17, 2016): Mixed-integer representations of nonlinear operators
• Nonlinear operators - graphs and conic models (updated September 17, 2016): Epi- and hypograph conic representations of nonlinear operators
• Nonlinear operators - callbacks (updated September 17, 2016): Callback representations of nonlinear operators
• Nonlinear operators (updated September 17, 2016): Working with nonlinear operators in a structured and efficient fashion
• Multiparametric programming (updated September 17, 2016): This tutorial requires MPT.
• Moment relaxations (updated September 17, 2016): Moment relaxations allow us to find lower bounds on polynomial optimization problems using semidefinite programming
• Logic programming (updated September 17, 2016): Logic programming in YALMIP means programming with operators such as alldifferent, number of non-zeros, implications and similar combinatorial objects.
• Integer programming (updated September 17, 2016): Undisciplined programming often leads to integer models, but in some cases you have no option.
• Global optimization (updated September 17, 2016): The holy grail! 60% of the time it works every time.
• Geometric programming (updated September 17, 2016): Geometric programming. Not about geometry.
• General convex programming (updated September 17, 2016): YALMIP does not care, but for your own good, think about convexity also in general nonlinear programs.
• Exponential cone programming (updated September 17, 2016): Convex conic optimization over exponentials and logarithms
• Envelope approximations for global optimization (updated September 17, 2016): Outer approximations of function envelopes are the core of the global solver BMIBNB
• Complex-valued problems (updated September 17, 2016): Complex data in optimization models. No problem in reality.
• Bilevel programming (updated September 17, 2016): Bilevel programming using the built-in bilevel solver
• Big-M and convex hulls (updated September 17, 2016): Learn how nonconvex models are written as integer programs using big-M strategies, and why it should be called small-M.
• Automatic dualization (updated September 17, 2016): Primal or dual arbitrary in primal-dual solver? No, but YALMIP can help you reformulate your model.
• New release R20160930 (updated September 30, 2016): Both patches and new features
• Sample-based robust optimization (updated September 28, 2016): Unintended consequences of an improved optimizer framework
• Extensions on the optimizer (updated September 28, 2016): Slice'n dice your problems
• MATLAB 2016 + CPLEX crash (updated September 23, 2016): Boom!
• New release R20160923 (updated September 23, 2016): It's been a while…
• Debugging infeasible models (updated September 22, 2016): Where to start?
• Octave support in YALMIP (updated April 16, 2014): MATLAB no longer required! Recommended though.
• Worst-case norms of matrices (updated February 08, 2014): Hard? Let's try anyway.
• (updated June 27, 2013): Using YALMIP objects and code in Simulink models, easy or fast, your choice.
• Unit commitment example - logic and integer programming (updated January 30, 2013): A common application of integer programming is the unit commitment problem in power generation, i.e., scheduling of set of power plants in order to meet a cu...
• (updated August 31, 2011): Common question: how can I solve a nonconvex QP using SeDuMi? Weird question, but interesting answer.
• Strictly feasible sum-of-squares solutions (updated February 09, 2011): A question on the YALMIP forum essentially boiled down to how can I generate sum-of-squares solutions which really are feasible, i.e. true certificates?
• Work-shop material (updated June 11, 2010): Files and exercise material from the YALMIP work-shop at the Swedish control conference 2010
• Polytopic geometry using YALMIP and MPT (updated June 11, 2010): Ever wondered how to compute the L1 Chebyshev ball?
• Tagging constraints (updated August 29, 2009): Name your constraints for easy reference
• NaN in model (updated August 29, 2009): Where, why, how?
• Computing multiple solutions in one shot (updated August 29, 2009): Avoid that for-loop by using vector objectives
• Constraints without any variables (updated August 29, 2009): Code works for almost all cases, but suddenly fails.
• New sum-of-squares example (updated August 29, 2009): Added a sum-of-squares example focusing on pre- and post-processing capabilities.
Related downloads and snippets:
• A simple example of solving linear matrix inequalities, convenient for beginners; a yalmip toolbox tutorial.
• Small simulation examples of microgrid operation in MATLAB using the yalmip toolbox.
• MATLAB's yalmip toolbox, which makes it convenient to set up and solve optimization problems.
• Unlike intelligent algorithms such as genetic algorithms and ant colony optimization, the solutions the yalmip toolbox obtains by calling CPLEX are exact, and the computation time is guaranteed.
• The MATLAB yalmip toolbox, often needed by waveform optimization algorithms in MATLAB.
• Hence it fails when it encounters any kind of YALMIP related code. In practice this means that all YALMIP code has to be placed in a so called Interpreted MATLAB function. This implies that you ...
• A two-variable unit commitment scheduling algorithm based on MATLAB and Yalmip.
• The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory.
• yalmip is a "grand integrator": it not only contains basic solvers of its own, such as linprog (linear programming), bintprog (binary linear programming) and bnb (branch and bound), it also provides a higher-level wrapper around solver packages such as cplex, GLPK and lpsolve. ...
• YALMIP: another important tool for solving LMIs (latest version 2014-10-30). Note that the set command has been removed in the new version, so constraints are now defined with + or by concatenation, e.g.: F = [x > 0, x < 32]; F = [F, x^2 < 1]; F = F + [y < 10, [x y; y 1] > 0]; ...
• It can be embedded in MATLAB and solve through cplex; programming with yalmip is very simple and reads almost like natural language.
• ## Simple yalmip examples 2019-09-29 15:21:01
yalmip in practice: yalmip is a modeling toolkit inside MATLAB. It builds constraints in one unified modeling language and calls other solvers, so you do not have to learn each solver's own language separately. The examples follow the paper: Yu Wuyang, "Application of the YALMIP toolbox in operations research experiment teaching," Research and Exploration in Laboratory, 2017(8). The code in that paper contains errors, so I have rewritten it. The examples below are from the paper:
1. General linear programming
2. The transportation problem
3. The knapsack problem
4. The assignment problem
5. The shortest path problem
Before writing any programs, set up the path to a solver such as cplex or gurobi, then run yalmiptest to check that the solver can be called.
Basic yalmip pattern: create the decision variables; build the objective z; set up the constraints C; set the solver options:
ops = sdpsettings('solver','Cplex','verbose',0);   % verbose: output verbosity; 0 shows results only
then solve:
result = solvesdp(C,z,ops)
General linear programming
Model:
$$\min Z = CX \quad \text{s.t.} \begin{cases} AX = b \\ X \geqslant 0 \end{cases}$$
Example:
$$\min Z = 12x_1 + 5x_2 + 8x_3 \quad \text{s.t.} \begin{cases} 2x_1 + 3x_2 + x_3 \geqslant 30 \\ 4x_1 + x_2 + 5x_3 \geqslant 15 \\ x_1, x_2, x_3 \geqslant 0 \end{cases}$$
clear;clc;close all;
c = [12 5 8];
A = [2 3 1; 4 1 5];
b = [30; 15];
% decision variables
x = sdpvar(3,1);
% objective
z = c*x;
% add constraints
%C = [];
%C = [C; A*x >= b];
%C = [C; x >= 0];
C = [A*x >= b, x >= 0];
ops = sdpsettings('verbose',0);
% solve
result = optimize(C,z,ops);
if result.problem == 0   % solved successfully
    x_star = double(x)
    z_star = double(z)
else
    disp('error while solving');
end
Output (a CPLEX warning is also printed):
Warning: File: C:\Program Files\IBM\ILOG\CPLEX_Studio_Community128\cplex\matlab\x64_win64\@Cplex\Cplex.p  Line: 965  Column: 0
Defining "changedParam" in a nested function shares it with the parent function. In a future release, to share "changedParam" between parent and nested functions, define it explicitly in the parent function.
> In cplexoptimset
  In sdpsettings>setup_cplex_options (line 617)
  In sdpsettings (line 145)
x_star =
         0
    9.6429
    1.0714
z_star =
   56.7857
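The reported optimum can be sanity-checked without YALMIP or CPLEX: at the solution above $x_1 = 0$ and both inequality constraints are tight, so it is the vertex obtained by solving the two constraints as equalities. A stand-alone check in plain Python with exact rational arithmetic (the numbers come from the example above):

```python
from fractions import Fraction as F

# At the reported optimum x1 = 0 and both constraints are active:
#   3*x2 +   x3 = 30
#     x2 + 5*x3 = 15
# Eliminate x3 = 30 - 3*x2:  x2 + 5*(30 - 3*x2) = 15  =>  -14*x2 = -135
x2 = F(135, 14)
x3 = 30 - 3 * x2

# Feasibility of (0, x2, x3) for the original inequalities.
assert 2*0 + 3*x2 + x3 >= 30
assert 4*0 + x2 + 5*x3 >= 15

z = 12 * 0 + 5 * x2 + 8 * x3
print(round(float(x2), 4), round(float(x3), 4), round(float(z), 4))
# 9.6429 1.0714 56.7857, matching the YALMIP output
```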
Transportation problem
Model:
$$\min Z=\sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij} \quad \text{s.t.} \begin{cases} \sum\limits_{j=1}^{n} x_{ij} \leqslant a_{i}, & i=1,2,\cdots,m \\ \sum\limits_{i=1}^{m} x_{ij} \geqslant b_{j}, & j=1,2,\cdots,n \\ x_{ij} \geqslant 0, & i=1,2,\cdots,m;\ j=1,2,\cdots,n \end{cases}$$
Example:
clear;clc;close all;
c = [1 3 5 7 13; 6 4 3 14 8; 13 3 1 7 4; 1 10 12 7 11];
a = [40 50 30 80];
b = [10 20 15 18 25];
% decision variables
x = intvar(4,5);
% objective
z = sum(sum(c.*x));
% add constraints
C = [];
for i=1:4
    C = [C; sum(x(i,:)) <= a(i)];
end
for j=1:5
    C = [C; sum(x(:,j)) >= b(j)];
end
C = [C; x >= 0];
ops = sdpsettings('verbose',0);
result = optimize(C,z,ops);
if result.problem == 0   % solved successfully
    x_star = double(x)
    z_star = double(z)
else
    disp('error while solving');
end
Output:
x_star =
     2    20     0    18     0
     0     0    10     0     0
     0     0     5     0    25
     8     0     0     0     0
z_star =
   331
Knapsack problem
Model:
$$\max Z=\sum_{i=1}^{n} c_{i} x_{i} \quad \text{s.t.} \begin{cases} \sum_{i=1}^{n} x_{i} w_{i} \leqslant W \\ \sum_{i=1}^{n} x_{i} v_{i} \leqslant V \\ 0 \leqslant x_{i} \leqslant n_{i} \text{ and integer} \end{cases}$$
Example:
clear;clc;close all;
c = [8 1 11 12 9 10 9 5 8 3];     % utility
w = [17 19 3 19 13 2 6 11 20 20]; % weight
v = [2 10 10 5 9 2 5 10 8 10];    % volume
n = [5 2 4 3 5 4 3 1 5 3];        % number available
% decision variables
x = intvar(10,1);
% objective (maximize c*x by minimizing its negative)
z = -(c*x);
% add constraints
C = [];
C = [C, w*x <= 80];
C = [C, v*x <= 60];
C = [C, 0 <= x <= n];
ops = sdpsettings('verbose',0);
% solve
result = optimize(C,z,ops);
if result.problem == 0   % solved successfully
    x_star = double(x)
    z_star = double(-z)
else
    disp('error while solving');
end
Output:
x_star =
     1
     0
     3
     1
     0
     4
     3
     0
     0
     0
z_star =
   120
Shortest path problem
% solve the shortest path problem with yalmip
clear;clc;close all;
% D is the arc-cost matrix of the network (its definition was omitted in the original post)
n = size(D,1);
% decision variables
x = binvar(n,n,'full');
% objective
z = sum(sum(D.*x));
% add constraints
C = [];
C = [C, (sum(x(1,:)) - sum(x(:,1)) == 1)];
C = [C, (sum(x(n,:)) - sum(x(:,n)) == -1)];
for i=2:(n-1)
    C = [C, (sum(x(i,:)) - sum(x(:,i)) == 0)];
end
ops = sdpsettings('verbose',0);
% solve
result = solvesdp(C,z,ops);
if result.problem == 0
    x_star = value(x)
    z_star = value(z)
else
    disp('error while solving');
end
Output:
x_star =
   NaN     1     0     0     0     0     0
     0   NaN     0     0     1     0     0
     0     0   NaN     0     0     0     0
     0     0     0   NaN     0     0     0
     0     0     0     0   NaN     0     1
     0     0     0     0     0   NaN     0
     0     0     0     0     0     0   NaN
z_star =
     5
Assignment problem
Model:
$$\min Z=\sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} x_{ij} \quad \text{s.t.} \begin{cases} \sum_{i=1}^{n} x_{ij}=1, & j=1,2,\cdots,n \\ \sum_{j=1}^{n} x_{ij}=1, & i=1,2,\cdots,n \\ x_{ij} \in \{0,1\}, & i,j=1,2,\cdots,n \end{cases}$$
Example:
clear;clc;close all;
% cost matrix (shown only in the output listing of the original post, but it must be defined before use)
c = [12 7 9 7 9; 8 9 6 6 6; 7 17 12 14 9; 15 14 6 6 10; 4 10 7 10 9];
% decision variables
x = binvar(5,5,'full');
% objective
z = sum(sum(c.*x));
% add constraints
C = [];
C = [C; sum(x,1) == 1];   % each column (job) assigned exactly once
C = [C; sum(x,2) == 1];   % each row (worker) assigned exactly once
ops = sdpsettings('verbose',0);
% solve
result = optimize(C,z,ops);
if result.problem == 0   % solved successfully
    x_star = double(x)
    z_star = double(z)
else
    disp('error while solving');
end
Output:
x_star =
     0     1     0     0     0
     0     0     0     1     0
     0     0     0     0     1
     0     0     1     0     0
     1     0     0     0     0
z_star =
    32
• As a modeling language, yalmip is plain and easy to pick up, and it can call different external solvers.
• The yalmip toolbox; tested personally, it works.
• Suitable for optimization in MATLAB. The YALMIP toolbox is a MATLAB toolkit: everything is done and called from MATLAB. It is also a modeling tool, almost a "language" of its own; you describe the model in this language and then call another solver (such as ...
• The latest YALMIP package as of 2019. Unzip the archive into matlab/toolbox, click Set Path on the MATLAB home tab, choose "Add with subfolders", and add everything in the yalmip package to the path. Run the yalmiptest command in MATLAB to check that the installation is complete.
• The yalmip modeling language for MATLAB simplifies entering the parameters of an algorithm.
• 1. Unzip yalmip and add it to the MATLAB path. 2. Run yalmiptest to check that the installation succeeded.
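Returning to the assignment example above: with only $5! = 120$ possible assignments, the reported minimum of 32 can be confirmed by exhaustive search in a few lines of plain Python (no solver required):

```python
from itertools import permutations

# Cost matrix from the assignment example above.
c = [[12, 7, 9, 7, 9],
     [8, 9, 6, 6, 6],
     [7, 17, 12, 14, 9],
     [15, 14, 6, 6, 10],
     [4, 10, 7, 10, 9]]

# Each permutation p assigns row i to column p[i]; 5! = 120 candidates.
best = min(sum(c[i][p[i]] for i in range(5)) for p in permutations(range(5)))
print(best)  # 32, matching the YALMIP result
```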
http://icpc.njust.edu.cn/Problem/Pku/2718/
# Smallest Difference

Time Limit: 1000MS    Memory Limit: 65536K

## Description

Given a number of distinct decimal digits, you can form one integer by choosing a non-empty subset of these digits and writing them in some order. The remaining digits can be written down in some order to form a second integer. Unless the resulting integer is 0, the integer may not start with the digit 0. For example, if you are given the digits 0, 1, 2, 4, 6 and 7, you can write the pair of integers 10 and 2467. Of course, there are many ways to form such pairs of integers: 210 and 764, 204 and 176, etc. The absolute value of the difference between the integers in the last pair is 28, and it turns out that no other pair formed by the rules above can achieve a smaller difference.

## Input

The first line of input contains the number of cases to follow. For each case, there is one line of input containing at least two but no more than 10 decimal digits. (The decimal digits are 0, 1, ..., 9.) No digit appears more than once in one line of the input. The digits will appear in increasing order, separated by exactly one blank space.

## Output

For each test case, write on a single line the smallest absolute difference of two integers that can be written from the given digits as described by the rules above.

## Sample Input

1
0 1 2 4 6 7

## Sample Output

28

## Source

Rocky Mountain 2005
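For inputs as small as the sample, a brute-force search over all orderings and split points is enough to reproduce the expected output. The sketch below is not an official solution (and is too slow for the full 10-digit limit without restricting to balanced splits), but it shows the leading-zero rule in code:

```python
from itertools import permutations

def smallest_difference(digits):
    """Try every ordering and every split into two non-empty numbers."""
    best = None
    n = len(digits)
    for perm in permutations(digits):
        for k in range(1, n):
            a, b = perm[:k], perm[k:]
            # No leading zero unless the number is the single digit 0.
            if (a[0] == 0 and len(a) > 1) or (b[0] == 0 and len(b) > 1):
                continue
            d = abs(int(''.join(map(str, a))) - int(''.join(map(str, b))))
            if best is None or d < best:
                best = d
    return best

print(smallest_difference([0, 1, 2, 4, 6, 7]))  # 28, e.g. 204 - 176
```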
https://math.stackexchange.com/questions/2683601/show-that-sups-inft
Show that $\sup(S)=\inf(T)$.

Suppose that $S$ is a nonempty subset of $\mathbb{R}$ and $S$ is bounded above. Also suppose that $T=\{x\in\mathbb{R}:x$ is an upper bound of $S\}$. Show that $\sup(S)=\inf(T)$.

My proof: Since $S$ is a nonempty, bounded-above subset of $\mathbb{R}$, $\sup(S)$ exists. Let $\sup(S)=\alpha$. It is clear that $\alpha\in T$ because $\alpha$ is an upper bound of $S$. Also, $\alpha\leq x$ for all $x\in T$ because $\alpha$ is the least upper bound. Since $\alpha\leq x$ for all $x\in T$ and $\alpha\in T$, we can conclude that $\inf(T)=\alpha=\sup(S)$ as required.

Is this proof ok?

• Your proof is fine. Mar 9, 2018 at 9:17
• The ideas are all there, it seems to me. Two points you might want to go into more depth (since this is apparently an introductory course). Clarify how $\alpha \leq x$ because "$\alpha$ is a supremum". Also clarify how $\alpha \leq x$ and $\alpha \in T$ implies that $\alpha = \inf T$. Mar 9, 2018 at 9:18
• You should use \mathbb{R} to get the correct symbol for the real numbers Mar 9, 2018 at 9:28

Your work is fine; rephrasing a bit:
$S \subset \mathbb{R}$, $S$ nonempty, is bounded above: $\sup(S)$ exists.
$T \subset \mathbb{R}$, $T$ nonempty (why?), is bounded below by any $s \in S$: $\inf(T)$ exists.
$\sup(S)$ is the least upper bound of $S$, i.e. an upper bound, hence $\sup(S) \in T.$
Assume there is an $x \in T$ with $x \lt \sup(S)$. Since $x$ is an upper bound of $S$: contradiction, $\sup(S)$ is the least upper bound of $S$. Hence for all $x \in T$: $x \ge \sup(S)$, and since $\sup(S) \in T$, we have $\inf(T)=\sup(S).$
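As a purely numerical illustration of the statement (a finite-grid approximation, not part of the proof): take $S=\{1-1/n\}$, whose supremum $1$ is not attained, and scan a grid of candidate reals for upper bounds:

```python
# S = {1 - 1/n : n = 1..10^4} has sup(S) = 1, never attained in S.
S = [1 - 1/n for n in range((1), 10_001)]

# Among a grid of candidates, the upper bounds of S form a sample of T.
grid = [k / 1000 for k in range(-2000, 2001)]  # -2.0 .. 2.0, step 0.001
T = [x for x in grid if all(s <= x for s in S)]

# max(S) only approximates sup(S) from below, but min(T) lands on it:
print(min(T))  # 1.0 = sup(S) = inf(T)
```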
https://www.matrix.edu.au/beginners-guide-year-10-maths/part-7-year-10-logarithms/
# Part 7: Year 10 Logarithms | Free Worksheet

In this article, we take the mystery out of logarithms.

## Year 10 Logarithms

In this article, we give you an overview of Year 10 Logarithms. Logarithms are used to calculate how loud something is, how acidic it might be, or how violent an earthquake is. Logarithms have important real-world applications and will be an important element of Year 11 and 12 Maths.

## Outline of Year 10 Logarithms

Being able to understand and work with logarithms effectively is an important skill, especially since they form the basis of practical scales such as the Richter scale, which is used to measure the magnitude or severity of an earthquake. This page will give examples of how to simplify logarithmic expressions using logarithmic laws, as well as an outline of the change of base formula for logarithms.

## NSW Syllabus Outcomes

Below are the NESA expectations for Logarithms Stage 5.3: Use the definition of a logarithm to establish and apply the laws of logarithms (ACMNA265)

• Define 'logarithm': the logarithm of a number to any positive base is the index when the number is expressed as a power of the base, i.e. $$a^{x}=y⇔\log_a y=x$$, where $$a>0,y>0$$.
• Deduce the following laws of logarithms from the laws of indices:
• $$\log_a x+\log_a y=\log_{a} (xy)$$
• $$\log_a x-\log_a y=\log_a (\frac{x}{y})$$
• $$\log_a x^n= n \log_a x$$
• Apply the laws of logarithms to simplify simple expressions, e.g. $$\log_{2}8, \ \log_{81}3, \ \log_{10}25+\log_{10}4, \ 3 \log_{10}2+\log_{10}(12.5), \ \log_2 18-2 \log_2 3$$
• Simplify expressions using the laws of logarithms, e.g. simplify $$5 \log_a a - \log_{a} a^{4}$$

Solve simple exponential equations (ACMNA270)

• solve simple equations that involve exponents or logarithms, e.g. $$2^{t}=8, \ 4^{(t+1)}=\frac{1}{8\sqrt2}, \ \log_{27}3=x, \ \log_{4}x=-2$$

This looks pretty intimidating, but it shouldn't be. All you're doing here is learning what logarithms are.
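The three logarithm laws in the dot points are easy to check numerically, e.g. with Python's math module (a sanity check on sample values, not a proof):

```python
from math import isclose, log

# Arbitrary positive test values; base a > 0 and a != 1.
x, y, a, n = 7.3, 2.5, 3.0, 4.0

assert isclose(log(x, a) + log(y, a), log(x * y, a))   # log_a x + log_a y = log_a (xy)
assert isclose(log(x, a) - log(y, a), log(x / y, a))   # log_a x - log_a y = log_a (x/y)
assert isclose(n * log(x, a), log(x ** n, a))          # log_a x^n = n log_a x

# And the defining relation a^x = y  <=>  log_a y = x:
assert isclose(log(a ** 2.5, a), 2.5)
print("all logarithm laws check out")
```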
Similar to index laws, once you learn what logarithms are you will learn the laws that govern them and use them to solve equations.

## Assumed knowledge

Students should be familiar with the definition of a logarithm and how to interchange between expressions involving logarithms and indices. We will also assume knowledge of the following result: $$\log_{a}a=1$$

## In this article

We will discuss:
• Logarithms of products and quotients
• Logarithm of a power
• Change of base theorem

## Year 10 Logarithms

Logarithms allow us to re-express equations of the form $$a^y=x$$ as $$\log_a x=y$$, where $$a>0$$ and $$x>0$$. We read this as 'log to the base $$a$$ of $$x$$ is $$y$$'. For example, we can re-express $$2^3=8$$ as $$\log_2 8=3$$.

Now, recall how we have the following laws for indices:
• $$b^m×b^n=b^{m+n}$$
• $$b^m÷b^n=b^{m-n}$$
These indicial laws can be used to derive laws for logarithms as below.

## Logarithms of products and quotients

We won't go through the proofs here, but feel free to try them by yourself using the index laws listed above!
• $$\log_{b}(xy)= \log_{b}x+\log_{b}y$$
• $$\log_{b}(x÷y)= \log_{b} \frac{x}{y}=\log_{b}x-\log_{b}y$$
You need to use these logarithmic laws when you're asked to express the sum of logarithms as one single logarithm. Make sure that when you're doing this, the logarithms you're combining all have the same base (notice that all terms in the equations above are to the same base $$b$$).

### Example

Express the following as a single logarithm:
1. $$\log_{5}10+\log_{5}7+\log_{5}2$$
2. $$\log_{2}(3y)+\log_2(22y)-\log_2(11y)$$

### Solution

1. Since each of the logarithms is to the base $$5$$, we can apply the logarithmic law $$\log_{5}x+\log_{5}y=\log_{5}(xy)$$ to combine them. Applying this to the first two terms gives us $$\log_{5}10+\log_{5}7=\log_{5}(10×7)=\log_{5}70$$ Our whole expression then simplifies to $$\log_{5}70+\log_{5}2$$.
We can apply the same logarithmic law to combine these two logarithms as $$\log_{5}(70×2)=\log_{5}140$$. Of course, we could also have combined the three logarithms in one go: $$\log_{5}10+\log_{5}7+\log_{5}2=\log_5(10×7×2)=\log_{5}140$$
2. Here we have pronumerals in the logarithms, but don't worry as the logarithmic laws still apply! We can combine $$\log_2(3y)+\log_2(22y)$$ as $$\log_2(3y×22y)=\log_2(66y^2 )$$. Our expression then reduces down to $$\log_2(66y^2 )-\log_2(11y)$$. Applying the logarithmic law $$\log_2 x-\log_2 y=\log_2 \frac{x}{y}$$ then allows us to simplify $$\log_2(66y^2 )-\log_2(11y)$$ as $$\log_2 \frac{66y^2}{11y}=\log_2(6y)$$.

You might also be asked some questions that require you to use these logarithmic laws the other way around. That is, instead of combining many logarithms into a single one, you might need to split a logarithm apart into separate logarithms.

### Example

Use the following values $$\log_{3}2≈0.631$$ and $$\log_{3}5≈1.465$$ to evaluate $$\log_{3}10$$.

### Solution

Basically, the aim is to express $$\log_{3}10$$ in terms of $$\log_{3}2$$ and $$\log_{3}5$$ so that we can use the approximations given. We can apply the logarithmic laws to write $$\log_{3}10=\log_{3}(2×5)=\log_{3}2+\log_{3}5$$. The approximations then give us $$\log_{3}10≈0.631+1.465=2.096$$.

## Logarithm of a power

Recall the index law: $$(b^m )^n=b^{mn}$$. From this it is possible to derive the following logarithmic law (prove it if you're keen!): $$\log_{b}(x^n)=n \log_{b}x$$. This law can be used to remove the power inside the logarithm, taking it to the front instead.

### Example

1. Expand $$\log_{7}8^2$$.
2. Expand $$\log_{3}(\frac{x^3 \sqrt{y}}{z})$$.

### Solution

1. We can take the power inside the logarithm to the front. This gives us $$2\log_{7}8$$.
2. First, we use the quotient logarithmic law to expand the expression out as $$\log_{3}(x^{3} \sqrt{y})-\log_{3}z$$.
The product logarithmic law then lets us expand this out as $$\log_{3}(x^3 )+ \log_{3}\sqrt{y}-\log_{3}z=\log_{3}(x^3 )+ \log_3 y^{\frac{1}{2}} -\log_{3}z$$. Finally, applying the logarithm of a power law gives us $$3 \log_{3}x+ \frac{1}{2} \log_{3}y -\log_{3}z$$.

## Change of base theorem

Calculators only let you calculate the logarithm of a number to two bases: specifically, base $$10$$ or base $$e$$. The log button on your calculator calculates the logarithm of a number to base $$10$$ whilst the $$ln$$ button does it for base $$e$$. But what if we want to compute the logarithm of a number to a different base? The change of base theorem allows for this, stating that:
$$\log_{a}n=\frac{\log_{b}n}{\log_{b}a}$$
The base $$b$$ can be any positive number other than $$1$$, but make sure that it is the same in the numerator as the denominator. Usually we use $$b=10$$ or $$b=e$$ so that we can use our calculator to evaluate the logarithm.

### Example

1. Evaluate $$\log_{25}11$$.
2. Simplify $$\log_{7}11×\log_{3}7$$.
3. Solve for $$x$$ in the following: $$\log_{25}3+\log_{5}7=\log_5 x$$.

### Solutions

1. We can use the change of base formula to rewrite $$\log_{25}11$$ in either base $$10$$ or base $$e$$. Change of base to base $$10$$ gives us $$\log_{25}11=\frac{\log_{10}11}{\log_{10}25} ≈\frac{1.041}{1.398}=0.745$$. If we change to base $$e$$, we get $$\log_{25}11=\frac{\log_{e}11}{\log_{e}25} ≈\frac{2.398}{3.219}=0.745$$, which is the same result.
2. Notice here that $$7$$ appears as the base of the first logarithm and as the number inside the second logarithm. This suggests to us that we should use the change of base formula on the first logarithm. We need to change the base to $$3$$, so that we have a $$\log_{3}7$$ on the denominator: $$\log_{7}11=\frac{\log_{3}11}{\log_{3}7}$$. The denominator then cancels out with the $$\log_{3}7: \log_{7}11× \log_{3}7= \frac{\log_{3}11}{\log_{3}7} × \log_{3}7=\log_{3}11$$.
3.
First, notice that two of the logarithms are to the base $$5$$, whilst the other one is to the base $$25$$. In order to solve the equation, we must have all the logarithms in the same base. Let's use the change of base formula to convert $$\log_{25}3$$ to base $$5: \log_{25}3=\frac{\log_{5}3}{\log_{5}25}$$. Using the logarithmic power law changes this to $$\frac{\log_{5}3}{\log_{5}25} =\frac{\log_{5}3}{2\log_{5}5 }=\frac{\log_{5}3}{2}$$.

Our equation then becomes: $$\frac{\log_{5}3}{2}+\log_{5}7=\log_{5}x$$, which is the same as $$\log_{5}3+2 \log_{5}7=2 \log_{5}x$$. We need to combine the left-hand side into a single logarithm so that we can compare with the right-hand side and find out what $$x$$ is. Using a combination of logarithmic laws, we get: $$\log_5(3×7^2)=\log_{5}x^2$$, i.e. $$x^2=3×7^2$$ and $$x=±7\sqrt{3}$$.

But note: this is not the final answer! One thing you have to be careful about when solving logarithmic equations is that you satisfy any restrictions on $$x$$. Looking back at our original equation, we can see that $$x$$ appears in the $$\log_5 x$$ term. Recall that any number inside a logarithm has to be positive $$(>0)$$. Therefore, we must have that $$x>0$$, i.e. $$x= 7 \sqrt{3}$$.

### Note to students

Restrictions on solutions to logarithmic equations are just something that you have to keep an eye out for. It's a good idea to make a note right at the start of the question what the restrictions on $$x$$ are (if any). Note that if, for instance, you had $$\log_{5}(x-3)$$ appearing in the equation (instead of $$\log_{5}x$$), you would need to check that your final answer satisfies $$x-3>0$$, i.e. $$x>3$$.
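Both the change of base theorem and the solution $$x=7\sqrt{3}$$ of Example 3 can be confirmed numerically (an illustration only; the values come from the worked example above):

```python
from math import isclose, log, sqrt

# Change of base: log_25(3) via natural log and via base 10 agree
# with Python's two-argument log(3, 25).
assert isclose(log(3, 25), log(3) / log(25))
assert isclose(log(3, 25), log(3, 10) / log(25, 10))

# Check that x = 7*sqrt(3) satisfies log_25(3) + log_5(7) = log_5(x).
x = 7 * sqrt(3)
assert isclose(log(3, 25) + log(7, 5), log(x, 5))
print(round(x, 3))  # 12.124
```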
## Summary

Use the following logarithmic laws to expand and simplify expressions involving logarithms, as well as solve logarithmic equations:
• $$\log_{b}(xy)=\log_{b}x+\log_{b}y$$
• $$\log_{b}(x÷y)=\log_b \frac{x}{y}=\log_{b}x-\log_{b}y$$
• $$\log_{b}(x^n)=n \log_{b}x$$

The change of base theorem is essential if you want to use your calculator to evaluate the logarithm of a number to a base that is not $$10$$ or $$e$$. It is also useful in simplifying certain logarithmic equations:
• $$\log_{a}n=\frac{\log_{b}n}{\log_{b}a}$$

When solving equations with logarithms, make sure to pay attention to any restrictions on solutions for $$x$$ as a result of expressions with $$x$$ appearing inside any of the logarithms in the equation.

## Checkpoint Questions

### Questions

Check your skills with the following 8 exercises!

Express the following as a single logarithm:
1. $$\log_3 15-\log_3 5+\log_3 8$$
2. $$3 \log_x y+\log_x(yz)-\log_x(x+1)$$
Expand:
3. $$\log_{5}125$$
4. $$\log_{10} \frac{w^{3}\sqrt{xy}}{z^2}$$
Use the following values $$\log_5 10≈1.431$$ and $$\log_5 3≈0.683$$ to evaluate:
5. $$\log_{5}2700$$
Evaluate:
6. $$\log_{5}33$$
Simplify:
7. $$\log_3 7×\log_7 3×\log_3 9$$
Solve for $$x$$:
8. $$\log_9(x-1)+\log_3 5=\log_3 15$$

### Solutions

1. $$\log_3 \frac{15×8}{5}=\log_3 24$$
2. $$\log_{x} \frac {y^{3} \times yz}{x+1} = \log_{x} \frac{y^4 z}{x +1}$$
3. $$3 \log_5 5=3$$
4. $$3\log_{10}w+\frac{1}{2} \log_{10}x+\frac{1}{2} \log_{10}y-2 \log_{10}z$$
5. $$\log_5(3^3×10^2)=3 \log_5 3+2 \log_5 10≈3×0.683+2×1.431=4.911$$
6. $$\frac{\log_{10}33}{\log_{10}5} ≈\frac{1.519}{0.699}≈2.173$$
7. $$\frac{\log_7 7}{\log_7 3} ×\log_7 3×2 \log_3 3=1×2=2$$
8. $$\frac{\log_3(x-1)}{\log_3 9} =\log_{3}15-\log_{3}5 \\ \frac{\log_3(x-1)}{2}=\log_{3}3 \\ \log_3(x-1) = 2\\ x-1 = 3^2\\ x = 9+1\\ x = 10 \\$$
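If you want to double-check your answers, several of the numeric checkpoint results can be verified in a few lines of Python (using the rounded values supplied in the questions):

```python
from math import isclose, log

# Q5: log_5(2700) = 3*log_5(3) + 2*log_5(10)
assert isclose(log(2700, 5), 3 * log(3, 5) + 2 * log(10, 5))
# ...and with the rounded values given, 3*0.683 + 2*1.431 = 4.911
assert isclose(3 * 0.683 + 2 * 1.431, 4.911)

# Q6: log_5(33) is approximately 2.173 (exact value 2.1725...)
assert isclose(log(33, 5), 2.173, abs_tol=1e-3)

# Q8: x = 10 satisfies log_9(x - 1) + log_3(5) = log_3(15)
x = 10
assert isclose(log(x - 1, 9) + log(5, 3), log(15, 3))
print("checkpoint answers verified")
```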
We hope that you’ve learnt something new from this subject guide, so now you can get out there and ace Mathematics! © Matrix Education and www.matrix.edu.au, 2019. Unauthorised use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Matrix Education and www.matrix.edu.au with appropriate and specific direction to the original content. ### Get free study tips and resources delivered to your inbox. Join 27,119 students who already have a head start. Our website uses cookies to provide you with a better browsing experience. If you continue to use this site, you consent to our use of cookies. Read our cookies statement.
http://cms.math.ca/cmb/msc/46F15?fromjnl=cmb&jnl=CMB
Search results for MSC category 46F15 (Hyperfunctions, analytic functionals [See also 32A25, 32A45, 32C35, 58J15])
Results 1 - 1 of 1
1. CMB 2001 (vol 44 pp. 105) Pilipović, Stevan: Convolution Equation in $\mathcal{S}^{\prime\ast}$ - Propagation of Singularities
The singular spectrum of $u$ in a convolution equation $\mu * u = f$, where $\mu$ and $f$ are tempered ultradistributions of Beurling or Roumieau type is estimated by $$SS u \subset (\mathbf{R}^n \times \Char \mu) \cup SS f.$$ The same is done for $SS_{*}u$.
Categories: 32A40, 46F15, 58G07