https://planetmath.org/ProofOfClassEquationTheorem
# proof of class equation theorem

$X$ is a finite disjoint union of finite orbits: $X=\bigcup_{i}Gx_{i}$. We can split this union by considering first only the orbits with exactly one element and then the rest:

$X=\bigcup_{j=1}^{l}\{x_{i_{j}}\}\cup\bigcup_{k=1}^{s}Gx_{i_{k}}=G_{X}\cup\bigcup_{k=1}^{s}Gx_{i_{k}}$

Then, using the orbit-stabilizer theorem, we have

$\#X=\#G_{X}+\sum_{k=1}^{s}[G:G_{x_{i_{k}}}]$

where $[G:G_{x_{i_{k}}}]\geq 2$ for every $k$: if one of these indices were $1$, the corresponding orbit would have a single element, but those orbits were already counted first. Hence these stabilizers are proper subgroups of $G$. This finishes the proof.

Title: proof of class equation theorem (ProofOfClassEquationTheorem), 2013-03-22 14:20:52, gumau (3545), version 4, Proof, msc 20D20
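As a concrete illustration (my own addition, not part of the original proof), the class equation of the symmetric group $S_3$ acting on itself by conjugation:

```latex
% #S_3 = 6. The fixed points of the conjugation action are the central
% elements, and Z(S_3) = {e}, so #G_X = 1.
% The orbits with more than one element are the conjugacy classes of the
% three transpositions and of the two 3-cycles, with index terms
%   [S_3 : C((12))] = 3  and  [S_3 : C((123))] = 2,
% each at least 2, as the proof requires:
\[
  \#S_3 \;=\; \#Z(S_3) \;+\; \sum_{k} [S_3 : C(x_k)]
        \;=\; 1 + 3 + 2 \;=\; 6.
\]
```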
http://openstudy.com/updates/4f19c680e4b04992dd2251fc
## anonymous 5 years ago

simplify ln(x^2+5x) = 2ln(x+1)

1. anonymous: you mean solve for x?
3. anonymous: start with $\ln(x^2+5x)=\ln((x+1)^2)$, and then, since log is a one-to-one function, this means $x^2+5x=(x+1)^2$; solve the quadratic equation.
4. anonymous: thank you, can you also help me with this question: http://openstudy.com/study?F5769872436922CDHW5=_#/updates/4f19c38be4b04992dd225045 thanks!
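Carrying the hint in reply 3 through to a solution (my own completion, not in the thread):

```latex
\[
  x^2 + 5x = (x+1)^2 = x^2 + 2x + 1
  \;\Longrightarrow\; 5x = 2x + 1
  \;\Longrightarrow\; 3x = 1
  \;\Longrightarrow\; x = \tfrac{1}{3}.
\]
% Check the domain: x = 1/3 gives x^2 + 5x = 16/9 > 0 and x + 1 = 4/3 > 0,
% so both logarithms are defined, and indeed ln(16/9) = 2 ln(4/3).
```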
http://ndnsim.net/2.3/guide-to-simulate-real-apps.html
# Simulating real NDN applications

The version of the ndn-cxx library bundled with ndnSIM includes a modified version of ndn::Face that sends and receives Interest and Data packets directly to and from the simulated instances of NFD. With this modification, ndnSIM can simulate real NDN applications written against ndn-cxx, provided they satisfy the requirements listed in this guide or can be modified to satisfy them.

## Requirements

1. Source code of the application must be available. The application (or the parts of it to be simulated) needs to be compiled against the ndnSIM version of the ndn-cxx library.

2. The source code should separate the main function from the functional components of the application that will be simulated. The entry point to the application (its functional component) will be an NS-3 application class, which should be able to create and destroy an instance of the simulated component when scheduled by the scenario.

3. The application should not use global variables if they define state for an application instance. ndnSIM must be able to create multiple independent instances of the application, e.g., one per simulated node. The exception to this requirement is ndn::Scheduler: its implementation has been rewired to use NS-3's scheduling routines.

4. The application MUST NOT contain any GUI or command-line terminal interactions.

5. The application SHOULD NOT use disk operations, unless application instances access unique parts of the file system. In the simulated environment, all application instances access the same local file system, which can result in undefined behavior if not handled properly.

6. The application MUST use a subset of the ndn::Face API:

   - If the application creates an ndn::Face, it MUST be created either with the default constructor or with the constructor that accepts a single boost::asio::io_service parameter.
```cpp
// Supported
ndn::Face face1;
ndn::Face face2(ioService);

// Not supported in ndnSIM
ndn::Face face3(transport);
ndn::Face face4(host_name, port_number);
// and others
```

   - ndn::Face::getIoService() should be used only to obtain a reference to boost::asio::io_service. The application MUST NOT use any methods of boost::asio::io_service; otherwise the simulation will crash.

```cpp
ndn::Face face;
// ...

// Supported (the rewired Scheduler implementation does not access io_service methods)
Scheduler scheduler(face.getIoService());

// Not supported in ndnSIM and will result in a crash
face.getIoService().stop();
```

   - The application should avoid use of Face::processEvents(), or use it with caution. In real applications, processEvents blocks until some data is received or the timeout callback is called; in that case, any variables created before calling this method still exist after the method returns. In ndnSIM, however, no such assumption can be made, since the scope of a variable is local.

```cpp
void
foo()
{
  ndn::Face face;
  face.expressInterest(...);
  face.setInterestFilter(...);

  // ndnSIM version of processEvents will not block!
  face.processEvents();
}
// After exiting foo's scope, the face variable is deallocated and all
// scheduled operations will be canceled.
```

7. The application (simulated component) MUST NOT create instances of boost::asio::io_service or use their methods. boost::asio::io_service is inherently incompatible with NS-3, as both provide mechanisms for asynchronous event processing.

8. We also recommend that the functional part of the application accept a reference to a KeyChain instance instead of creating the instance itself. When simulating non-security aspects of the application, the simulation scenario can then use a dummy implementation of the KeyChain that does not perform crypto operations but signs Data and Interests with fake signatures.
For example, this can be achieved by enabling the constructor of the real application to accept a reference to the KeyChain:

```cpp
// Real applications should accept a reference to the KeyChain instance
RealApp::RealApp(KeyChain& keyChain)
  : m_keyChain(keyChain)
{
}
```

## How to simulate real applications using ndnSIM

To simulate a real application, the simulation scenario should contain a class derived from ns3::Application. This class needs to create an instance of the ndn::Face and/or the real application in the overloaded StartApplication method. It also needs to ensure that the created instance is not deallocated until the StopApplication method is called.

For example, if the functional class of the real application looks like:

```cpp
// (The #include directives were lost in extraction; the application needs the
// ndn-cxx headers for Face, KeyChain, Scheduler, Interest, and Data.)

namespace app {

class RealApp
{
public:
  RealApp(ndn::KeyChain& keyChain)
    : m_keyChain(keyChain)
    , m_faceProducer(m_faceConsumer.getIoService())
    , m_scheduler(m_faceConsumer.getIoService())
  {
    // register prefix and set interest filter on producer face
    m_faceProducer.setInterestFilter("/hello",
                                     std::bind(&RealApp::respondToAnyInterest, this, _2),
                                     std::bind([]{}),
                                     std::bind([]{}));

    // use scheduler to send interest later on consumer face
    m_scheduler.scheduleEvent(ndn::time::seconds(2), [this] {
        m_faceConsumer.expressInterest(ndn::Interest("/hello/world"),
                                       std::bind([] { std::cout << "Hello!" << std::endl; }),
                                       std::bind([] { std::cout << "Bye!.." << std::endl; }));
      });
  }

  void
  run()
  {
    m_faceConsumer.processEvents(); // ok (will not block and do nothing)
    // m_faceConsumer.getIoService().run(); // will crash
  }

private:
  void
  respondToAnyInterest(const ndn::Interest& interest)
  {
    auto data = std::make_shared<ndn::Data>(interest.getName());
    m_keyChain.sign(*data);
    m_faceProducer.put(*data);
  }

private:
  ndn::KeyChain& m_keyChain;
  ndn::Face m_faceConsumer;
  ndn::Face m_faceProducer;
  ndn::Scheduler m_scheduler;
};

} // namespace app
```

The corresponding NS-3 "entry point" application class can look like this:

```cpp
#include "ns3/ndnSIM/helper/ndn-stack-helper.hpp"
#include "ns3/application.h"

namespace ns3 {

// Class inheriting from ns3::Application
class RealAppStarter : public Application
{
public:
  static TypeId
  GetTypeId()
  {
    static TypeId tid = TypeId("RealAppStarter")
      .SetParent<Application>()
      .AddConstructor<RealAppStarter>();
    return tid;
  }

protected:
  // inherited from Application base class.
  virtual void
  StartApplication()
  {
    // Create an instance of the app, passing the dummy version of KeyChain (no real signing)
    m_instance.reset(new app::RealApp(ndn::StackHelper::getKeyChain()));
    m_instance->run(); // can be omitted
  }

  virtual void
  StopApplication()
  {
    // Stop and destroy the instance of the app
    m_instance.reset();
  }

private:
  std::unique_ptr<app::RealApp> m_instance;
};

} // namespace ns3
```

Note: an ndn::Face MUST be created within the context of a specific ns3::Node. In simple words, this means that the ndn::Face constructor must be called somewhere within the overloaded StartApplication method. An attempt to create an ndn::Face outside an ns3::Node (e.g., if the example declared a member variable Face m_face in the RealAppStarter class) will result in a simulation crash.
The final step is to write the simulation scenario itself, which defines the network topology, the routing information between nodes, the nodes on which the application should be installed, and when the application should be started and stopped. For a trivial example, let us assume that we have only one simulation node and want to start the application at time 6.5 seconds. The scenario can look like:

```cpp
#include "ndn-cxx-simple/real-app.hpp"
#include "ndn-cxx-simple/real-app-starter.hpp"

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/ndnSIM-module.h"

namespace ns3 {

NS_OBJECT_ENSURE_REGISTERED(RealAppStarter);

int
main(int argc, char* argv[])
{
  CommandLine cmd;
  cmd.Parse(argc, argv);

  Ptr<Node> node = CreateObject<Node>();

  ndn::StackHelper ndnHelper;
  ndnHelper.Install(node);

  ndn::AppHelper appHelper("RealAppStarter");
  appHelper.Install(node)
    .Start(Seconds(6.5));

  Simulator::Stop(Seconds(20.0));
  Simulator::Run();
  Simulator::Destroy();
  return 0;
}

} // namespace ns3

int
main(int argc, char* argv[])
{
  return ns3::main(argc, argv);
}
```

## Example of a real application simulation

To demonstrate the functionality of ndnSIM in a more complex and realistic case, we will use the NDN ping application included as part of NDN Essential Tools. For this example, we used a scenario template repository as a base for writing simulation-specific extensions and defining scenarios; the final version of the scenario is available on GitHub.
The following lists the steps we took to simulate the ndnping and ndnpingserver apps on a simple three-node topology:

- imported the latest version of the NDN Essential Tools source code as a git submodule
- updated the build script (wscript) to compile the source code of ndnping and ndnpingserver (with the exception of compilation units that contain a main function) against ndnSIM (View changes)
- defined PingClient and PingServer classes to hold the state of application instances (View changes)
- defined PingClientApp and PingServerApp NS-3 applications, which create and destroy instances of PingClient and PingServer per NS-3 logic (View changes)
- defined a simple scenario that creates a three-node topology, installs NDN stacks, and installs the PingClientApp and PingServerApp applications on different simulation nodes (View changes)

After all these steps, the repository is ready to run the simulation (see README.md for more details).

Note: the listed steps did not include any modification of the NDN Essential Tools source code. However, this was not the case when we initially attempted to run the simulation, as the source code violated a few requirements of this guide. The changes that we made are an example of how to adapt source code to be compatible with ndnSIM simulations.
https://mathoverflow.net/questions/207879/introducing-meets-while-preserving-directed-closure
# Introducing meets while preserving directed closure A poset $\mathbb{P}$ is called well-met iff every pair of compatible conditions in $\mathbb{P}$ has a greatest lower bound. Question: Suppose $\mathbb{P}$ is a separative partial order which is $\lambda$-directed closed (for some regular infinite cardinal $\lambda$). Can we always view $\mathbb{P}$ as a dense suborder of a well-met poset which is still $\lambda$-directed closed? I'm vaguely aware that Boolean completions can screw up properties like directed closure, but I'm only asking for finite infima, not arbitrary infima. It's not clear to me that the obvious "well-met closure" of $\mathbb{P}$ is still $\lambda$-directed closed, but I also don't have a counterexample. • It sounds weird that Boolean completions can screw up closure properties. – Asaf Karagila May 28 '15 at 20:55 • @NoahSchweber No infinite complete Boolean algebra is even countably closed, since there must be countably infinite antichains, and you can make an $\omega$-descending sequence by joining the tail starting further and further out. These meet to zero, so the algebra is not countably closed. – Joel David Hamkins May 28 '15 at 21:11 • @Joel: Is it possible to characterize these sort of failures? Namely, "if $A\subseteq\mathcal B(\Bbb P)$ is a counterexample to some closure property of $\Bbb P$, then ..." or something like that? – Asaf Karagila May 28 '15 at 21:36 • These could be relevant: Sh1036 and Assaf's blogpost blog.assafrinot.com/?p=3841 – Ashutosh May 28 '15 at 21:39 • On Sh1036: I proof-read the current arxiv version (part of my job), and requested Shelah to make some changes. They should appear soon. I forgot to say that well-met condition does matter (forcing-wise). Assaf explains this well on his blogpost. – Ashutosh May 28 '15 at 22:00 $\newcommand\P{\mathbb{P}}$The answer is no. For a counterexample, consider the following partial order $\P$. 
On the bottom layer, we have countably many incompatible atoms $a_n$ for $n<\omega$. On a second layer, we have a collection of pairwise-incomparable elements $b_k$, with $a_n<b_k$ just in case $k\neq n$. So each $b_k$ is above all $a_n$ except $a_k$; in this sense, $b_k$ is like $\neg a_k$ in the Boolean algebra.

This partial order is $\lambda$-directed closed for any $\lambda$, because any directed set can contain at most one atom $a_n$, and if it contains two different $b_k$'s then it must contain at least one $a_n$ below both of them, and in this case that $a_n$ will be a lower bound. Also, it is easy to check that $\P$ is separative.

But I claim that $\P$ is not a dense suborder of any directed closed well-met partial order $\bar\P$. If it were, then consider the elements of $\bar\P$ given by $b_0$, $b_0\wedge b_1$, $b_0\wedge b_1\wedge b_2$, and so on. This is a descending sequence in $\bar\P$, but it can have no lower bound in $\bar\P$, since the atoms of $\P$ must be dense in $\bar\P$, but no $a_n$ is below all those finite meets, as $a_n$ is excluded once $b_n$ is included.

One can make a non-atomic counterexample by replacing each atom with some $\lambda$-directed partial order and using the same argument otherwise.

• The partial order $\P$ is just the atoms and co-atoms in a power set. – Joel David Hamkins May 29 '15 at 0:20
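The structure of the counterexample can be checked mechanically on a finite truncation. This sketch (my own, not part of the answer; the truncation size N is arbitrary) verifies that $b_0$ and $b_1$ are compatible in the poset yet have no greatest lower bound there:

```python
# Finite truncation of the poset P from the answer:
# atoms a_0..a_{N-1} and coatoms b_0..b_{N-1}, with a_n < b_k iff k != n.
N = 8
atoms = [("a", n) for n in range(N)]
coatoms = [("b", k) for k in range(N)]
elements = atoms + coatoms

def leq(x, y):
    """x <= y in P: reflexivity plus the generating relations a_n < b_k (k != n)."""
    if x == y:
        return True
    return x[0] == "a" and y[0] == "b" and x[1] != y[1]

def lower_bounds(x, y):
    return [z for z in elements if leq(z, x) and leq(z, y)]

# b_0 and b_1 are compatible: a_2, for instance, lies below both...
lbs = lower_bounds(("b", 0), ("b", 1))
assert ("a", 2) in lbs

# ...but they have no greatest lower bound in P: the common lower bounds are
# exactly the atoms a_n with n not in {0, 1}, which are pairwise incomparable,
# so none of them sits above all the others.
assert lbs == [("a", n) for n in range(2, N)]
greatest = [z for z in lbs if all(leq(w, z) for w in lbs)]
assert greatest == []
print("b_0 and b_1 are compatible but have no meet in P")
```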
https://math.stackexchange.com/questions/3126727/about-o-x-modules
# About $O_X$-modules?

Consider an $O_X$-module $F$. Is it true in general that $\operatorname{Hom}_{O_X}(O_X,F) \cong F(X)$?

Moreover, I need to prove that if $F$ is a flasque sheaf, then $H^n(X,F)=0$ for $n>0$. I think this is because the section functor $\Gamma(U,-)$ sends an exact sequence of sheaves $0 \to F \to I \to H \to 0$ to an exact sequence $0 \to \Gamma(U,F) \to \Gamma(U,I) \to \Gamma(U,H) \to 0$. This implies that the derived functors $R\Gamma$ are all $0$. Is this correct? Thanks for the help!!

• For your first question, yes, $\operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X,\mathcal{F})=\mathcal{F}(X)$. It is a good exercise to construct the bijection between these two groups. The second question is more difficult, but there is a proof in any book on the subject. Do you have a difficulty with the proof? – Roland Feb 28 at 15:24
• The main problem was the first point, but I want to try it by myself; for the second part, my purpose was to understand better the proof in Hartshorne's book. – andres Feb 28 at 22:45
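The bijection Roland alludes to can be written down directly (my own completion of the exercise): a morphism out of $\mathcal{O}_X$ is determined by where it sends the unit section.

```latex
% A morphism \varphi : \mathcal{O}_X \to \mathcal{F} is determined by the
% global section \varphi_X(1) \in \mathcal{F}(X): on any open U,
% \mathcal{O}_X(U)-linearity and compatibility with restriction force
%   \varphi_U(s) = s \cdot \varphi_X(1)|_U   for all s \in \mathcal{O}_X(U).
% Conversely, any t \in \mathcal{F}(X) defines such a morphism by the same
% formula, and the two constructions are mutually inverse, giving
\[
  \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X,\mathcal{F})
  \;\xrightarrow{\;\sim\;}\; \mathcal{F}(X),
  \qquad \varphi \longmapsto \varphi_X(1).
\]
```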
http://crypto.stackexchange.com/questions?page=2&sort=active&pagesize=50
# All Questions

### How to generate own secure elliptic curves? (119 views)

I know that the algorithm used to generate the Brainpool curves and the NIST curves is published. The algorithm should be this one (RFC 5639, Appendix A). From what it looks like, it's rather slow to ...

### Why is Chrome saying that "TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)" is an obsolete cipher suite? [migrated] (19 views)

When I go to this site, Chrome Version 44.0.2403.89 is connecting to the server with TLS_RSA_WITH_AES_128_CBC_SHA (0x2f), and it states that this is an "obsolete cipher suite". For what reason is it ...

### Favor hash size or field size when systems are disparate? (41 views)

I'm working on an implementation of Krawczyk's Hashed MQV (HMQV). I'm using Crypto++, which is a C++ library. C++ has some features where classes that represent the crypto objects can be combined ...

### Why isn't Rabin-Williams cryptosystem widely used? (120 views)

I think we all know RSA. And of course we also know DJB (a.k.a. Daniel J. Bernstein). Now some have already noticed that he has an opinion on cryptographic questions. In his 2008 paper ("RSA ...

### Proving existence of an encryption scheme that has indistinguishable multiple encryptions in the presence of an eavesdropper, but is not CPA-secure [duplicate] (24 views)

I got stuck trying to find a solution to exercise 3.7 of the Katz-Lindell book. The exercise also assumes the existence of a pseudorandom function. The problem is that a multiple messages ...

### Why is Diffie-Hellman considered in the context of public key cryptography? (4k views)

In all the textbooks I used, the Diffie-Hellman key exchange is under "public key cryptography". As far as I can see, it is a method to exchange a key to be used with a symmetric cryptographic algorithm, ...

### How does knowing the factorization of N help to obtain the secret? (221 views)

Assuming $x=a^2 \pmod n$ and knowing $x$, $p$, $q$, how is it possible to obtain $a$?
### How can I generate a good password from a SHA512 hash? (87 views)

I have to change local administrator passwords on machines. I don't want to store the password in a database. I have to generate a password that I can find later to connect to the machine again. So I ...

### Why is Rabin encryption equivalent to factoring? (20 views)

I don't understand the proof of equivalence I've read everywhere (e.g., in Rabin's paper or on Wikipedia). Here's my objection: let's say you have a Rabin decryption oracle that takes ...

### What is "witness encryption"? (89 views)

I recently skimmed two papers on time-lock encryption: "Time-release Protocol from Bitcoin and Witness Encryption for SAT" by Liu, Garcia, and Ryan; "How to Build Time-Lock Encryption" by Jager ...

### Explanation of part of a visual cryptography algorithm (61 views)

I have been working on a project involving visual cryptography and I am stuck with the following problem. My question is related to this paper, AN IMPROVED VISUAL CRYPTOGRAPHY SCHEME FOR SECRET ...

### Shuffleless PRNG function with non-repeating values? (41 views)

I need a simple PRNG function of type: Integer = PRNG(n, maxval), as I would like to count from 0 to maxval, not in a linear manner, but in a pseudorandom manner where I still use every value, but ...

### Chance to find HMAC key/salt when having knowledge of the hashed data? (22 views)

I'm not sure how to word this... I'm working with an HMAC (I think of it as a "salted hash"). I know the entire string being hashed; I do NOT know the salt. I also know the first 8 characters of the ...
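The question "How does knowing the factorization of N help to obtain the secret?" has a standard answer when $p \equiv q \equiv 3 \pmod 4$: take square roots modulo each prime and combine them with the Chinese Remainder Theorem. A sketch (my own illustration with toy numbers, not from the page):

```python
# Recover the square roots of x mod n = p*q when p ≡ q ≡ 3 (mod 4).
# Knowing the factorization is what makes this easy; without it, computing
# square roots mod n is as hard as factoring n.

def sqrt_mod_prime(x, p):
    """Square root mod a prime p ≡ 3 (mod 4): x^((p+1)/4) mod p."""
    assert p % 4 == 3
    return pow(x, (p + 1) // 4, p)

def crt(r_p, r_q, p, q):
    """Combine residues r_p (mod p) and r_q (mod q) into a residue mod p*q."""
    return (r_p * q * pow(q, -1, p) + r_q * p * pow(p, -1, q)) % (p * q)

def rabin_roots(x, p, q):
    """All four square roots of x mod p*q (x a quadratic residue mod both primes)."""
    rp, rq = sqrt_mod_prime(x, p), sqrt_mod_prime(x, q)
    return sorted({crt(s * rp % p, t * rq % q, p, q)
                   for s in (1, -1) for t in (1, -1)})

p, q = 7, 11       # toy primes, both ≡ 3 (mod 4)
n = p * q
a = 9              # the "secret"
x = a * a % n      # 81 mod 77 = 4
roots = rabin_roots(x, p, q)
print(roots)       # the four square roots of 4 mod 77; a = 9 is among them
assert a in roots
```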
https://www.biostars.org/p/218009/
Getting exon sequences for all human genes

valerie ▴ 100, 6.1 years ago:

Hi everyone, I would be very grateful if you could help me. I want to download the sequences of all the exons for each human gene. I went to Ensembl BioMart and tried to do it for BRCA2 first (this was a random choice). First I selected the following attributes: Unspliced (gene), Exon start, Exon end, Strand, Gene start. I thought that this information would be enough to 'cut out' all the exons. The first thing I noticed is that the number of exons is almost twice as large as the number I see on the Wiki. Then I tried to download the exon sequences directly, but the number of sequences was again larger than the number of exons should be, and moreover several exon IDs correspond to the same sequence. I also tried to download the coding sequence and the cDNA, but the length of both sequences is not consistent with the 'official' BRCA2 length! This all drives me crazy and I have absolutely no idea what to do. All I want is to get, for each gene, the sequences of its exons. Help me please! Thanks!

genome gene sequence exon • 3.2k views

Answer (sacha ★ 2.4k, 6.1 years ago):

wget http://hgdownload.cse.ucsc.edu/goldenpath/hg19/database/refGene.txt.gz

Then keep only unique gene names (column 13) and extract the coordinates (chrom, exonStart, exonEnd) to a BED file. Columns 10 and 11 contain the exon start and exon end positions, separated by commas.

zcat refGene.txt.gz | sort -u -k13,13 | cut -f3,10,11 | awk 'BEGIN{OFS="\t"}{split($2,start,",");split($3,end,","); for(i=1;i<length(start);++i){print $1,start[i],end[i]}}' > exons.bed

Finally, you can get the exon sequences using bedtools getfasta:

bedtools getfasta -fi hg19.fa -bed exons.bed -fo exon.fa

Pay attention to work only with chromosome names in the range 1-22, X, Y.
Comment: I just counted the size of the exome using the following command:

zcat refGene.txt.gz | sort -u -k 13,13 | awk '{SUM=0;split($10,s,","); split($11,e,",");for(i=1;i<length(s);i++){SUM+=e[i] - s[i]};print SUM}' | paste -sd "+" | bc

Divided by the human genome size from hg19, I get: 2.32%.

Reply: Dear Sacha, thank you very much, this is exactly what I need! May I also ask you one more question? I have headers in the exons.fa file like "chr19:58346805-58347029". Is there any possibility to have a heading in the format "BRCA exon1"? Thank you in advance!

Reply: Solved this issue myself :) In case anyone needs this, you should run:

zcat refGene.txt.gz | sort -u -k13,13 | cut -f3,10,11,13 | awk 'BEGIN{OFS="\t"}{split($2,start,",");split($3,end,","); for(i=1;i<length(start);++i){print $1,start[i],end[i],$4"_"i}}' > exons.bed

to add geneName_exonId to the BED file, and then use the -name option when running bedtools.

Answer (6.1 years ago): The issue you have with more sequences than exons is most likely due to alternative transcripts. I think bedtools getfasta can do what you want, if you supply a reference fasta and a corresponding GTF or BED file of all exons (http://bedtools.readthedocs.io/en/latest/content/tools/getfasta.html).

Reply: Thank you very much!!

Reply: In bedtools getfasta it seems that the only option to have the exons concatenated is using -split, which is available for the bed12 format. My BED file is not in bed12 and each exon is in a separate row (though the name column of the BED file is the gene ID). How can I get the concatenated sequence?
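The awk one-liners above can be mirrored in Python for readability (my own sketch, under the same assumptions: refGene.txt columns 3, 10, 11, and 13 hold chrom, exonStarts, exonEnds, and gene name; indices below are 0-based):

```python
# Parse refGene-format rows into named exon intervals (BED-like tuples),
# mirroring: cut -f3,10,11,13 + the awk split/print loop above.

def exon_records(refgene_line):
    fields = refgene_line.rstrip("\n").split("\t")
    chrom, gene = fields[2], fields[12]
    starts = [int(s) for s in fields[9].rstrip(",").split(",")]
    ends = [int(e) for e in fields[10].rstrip(",").split(",")]
    for i, (start, end) in enumerate(zip(starts, ends), 1):
        yield (chrom, start, end, f"{gene}_{i}")  # geneName_exonId, as in the thread

# Toy two-exon record laid out in refGene's 16 tab-separated columns.
row = "\t".join([
    "0", "NM_TEST", "chr13", "+", "100", "900", "150", "850",
    "2", "100,500,", "300,900,", "0", "BRCA2", "cmpl", "cmpl", "0,0,"
])
records = list(exon_records(row))
print(records)
# [('chr13', 100, 300, 'BRCA2_1'), ('chr13', 500, 900, 'BRCA2_2')]
```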
https://proofwiki.org/wiki/Absolutely_Convergent_Product_Does_not_Diverge_to_Zero/Proof_2
# Absolutely Convergent Product Does not Diverge to Zero/Proof 2

## Theorem

Let $\struct {\mathbb K, \norm{\,\cdot\,}}$ be a valued field. Let the infinite product $\displaystyle \prod_{n \mathop = 1}^\infty \paren {1 + a_n}$ be absolutely convergent. Then it is not divergent to $0$.

## Proof

We have that $\displaystyle \prod_{n \mathop = 1}^\infty \paren {1 - \norm{a_n}}$ is absolutely convergent. By Factors in Absolutely Convergent Product Converge to One, $\norm{a_n} < 1$ for $n \geq n_0$. Thus $\displaystyle \sum_{n \mathop = n_0}^\infty \log \paren {1 - \norm{a_n}}$ is absolutely convergent.

Suppose that the product diverges to $0$. Then $\displaystyle \prod_{n \mathop = n_0}^\infty \paren {1 + a_n} = 0$. By Norm of Limit, $\displaystyle \prod_{n \mathop = n_0}^\infty \norm{1 + a_n} = 0$. By the Triangle Inequality and the Squeeze Theorem, $\displaystyle \prod_{n \mathop = n_0}^\infty \paren {1 - \norm{a_n}} = 0$. By Logarithm of Infinite Product of Real Numbers, $\displaystyle \sum_{n \mathop = n_0}^\infty \log \paren {1 - \norm{a_n}}$ diverges to $-\infty$. This is a contradiction.

$\blacksquare$
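The triangle-inequality step can be spelled out (my own expansion of the cited step, not part of the ProofWiki page):

```latex
% For each n \ge n_0 we have \|a_n\| < 1, and the reverse triangle
% inequality gives
%   \|1 + a_n\| \ge 1 - \|a_n\| \ge 0,
% so the partial products satisfy
\[
  0 \;\le\; \prod_{n = n_0}^{N} \bigl(1 - \norm{a_n}\bigr)
    \;\le\; \prod_{n = n_0}^{N} \norm{1 + a_n}.
\]
% Since the right-hand side tends to 0 as N \to \infty, the Squeeze Theorem
% forces \prod_{n = n_0}^{\infty} (1 - \|a_n\|) = 0 as well.
```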
http://openstudy.com/updates/4df77e780b8b370c28bd5e83
chris 3 years ago: what is the sum of 1/n^2 from 0 to infinity?

1. PreMedExpert: I would like to know how to solve that as well.
2. siddharth: Isn't this the Euler series? Yeah, I think this sum doesn't converge, but I'm not sure. The ratio test and root test seem to be inconclusive in this case.
4. imranmeah91: By the integral test, it does converge.
5. siddharth: Oh right, this is the Basel problem that Euler solved, which made him famous. Fair point, it does. Good catch siddharth!
7. chris: ah sorry, i should have said 1 to infinity :-). The n = 0 term, 1/0^2, is undefined, so the sum has to start at n = 1.
8. turnand: The answer is $\pi^{2}/6$
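The convergence to $\pi^2/6$ is easy to check numerically (a quick sketch; the tail of the partial sum is roughly $1/N$, so expect about five correct digits here):

```python
import math

# Partial sums of sum_{n=1}^N 1/n^2 approach pi^2/6 (the Basel problem).
def basel_partial(N):
    return sum(1.0 / n**2 for n in range(1, N + 1))

s = basel_partial(100000)
print(s, math.pi**2 / 6)  # partial sum vs. the exact limit
```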
https://www.encyclopediaofmath.org/index.php/Bellman%E2%80%93Harris_process
# Bellman-Harris process

(Redirected from Bellman–Harris process) 2010 Mathematics Subject Classification: Primary: 60J80 [MSN][ZBL] A Bellman–Harris process is a special case of an age-dependent branching process (cf. Branching process, age-dependent). It was first studied by R. Bellman and T.E. Harris [BH]. In the Bellman–Harris process it is assumed that particles live, independently of each other, for random periods of time, and produce a random number of new particles at the end of their lifetimes. If $G(t)$ is the distribution function of the lifetimes of the individual particles, if $h(s)$ is the generating function of the number of direct descendants of one particle, and if at time $t=0$ the age of the particle was zero, then the generating function $F(t,s)={\rm E}s^{\mu(t)}$ of the number of particles $\mu(t)$ satisfies the non-linear integral equation $$F(t,s) = \int_0^t h(F(t-u,s))\,dG(u) + s(1-G(t)).$$ If $$G(t)=1-e^{-\lambda t},\quad t\ge 0,$$ the Bellman–Harris process is a Markov branching process with continuous time.

#### References

[BH] R. Bellman, T.E. Harris, "On the theory of age-dependent stochastic branching processes" Proc. Nat. Acad. Sci. USA, 34 (1948) pp. 601–604 MR0027466

How to Cite This Entry: Bellman–Harris process. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Bellman%E2%80%93Harris_process&oldid=22084
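The Markov special case lends itself to a quick Monte Carlo sketch (an illustration, not from the article; the offspring law $h(s) = (1+s^2)/2$ is an arbitrary choice for the demo):

```python
import random

def alive_at(t, lam=1.0, rng=random):
    """Particles alive at time t among the descendants of one particle of
    age zero, in the Markov special case G(t) = 1 - exp(-lam*t).
    Offspring generating function h(s) = (1 + s^2)/2: a particle leaves
    0 or 2 children with equal probability (a demo choice, mean m = 1)."""
    life = rng.expovariate(lam)
    if life > t:
        return 1  # the particle outlives the horizon t
    children = rng.choice([0, 2])
    return sum(alive_at(t - life, lam, rng) for _ in range(children))

random.seed(0)
trials = 5000
mean = sum(alive_at(1.0) for _ in range(trials)) / trials
print(mean)  # near E[mu(1)] = exp(lam*(m-1)*t) = 1 for this critical law
```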
http://mathhelpforum.com/advanced-algebra/104966-idempotents-projective-r-modules-print.html
# Idempotents and Projective R-Modules

• Sep 29th 2009, 03:06 AM robeuler Idempotents and Projective R-Modules Let e in R be idempotent. Show that the ideal Re is a projective R-module. I want to do this without explicitly lifting. But I am struggling to write Re + M = N such that N is free. • Sep 29th 2009, 03:18 AM NonCommAlg Quote: Originally Posted by robeuler Let e in R be idempotent. Show that the ideal Re is a projective R-module. I want to do this without explicitly lifting. But I am struggling to write Re + M = N such that N is free. Hint: $Re \oplus R(1-e)=R.$ • Apr 16th 2013, 08:48 AM swang Re: Idempotents and Projective R-Modules Does this hint work? I think R(1-e) = R because 1-e is a unit, by (1-e)(1+e)=1, if e is idempotent.
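The hint can be sanity-checked in a small concrete ring (a sketch with a hypothetical idempotent, not a proof). Note also that for an idempotent, $(1-e)(1+e) = 1 - e^2 = 1 - e$, so $1-e$ is generally not a unit:

```python
# Check the decomposition r = r*e + r*(1-e) behind Re ⊕ R(1-e) = R,
# in the ring R of 2x2 integer matrices, for one chosen idempotent e.
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

one = [[1, 0], [0, 1]]
e = [[1, 0], [0, 0]]
assert mul(e, e) == e  # e is idempotent

# Every r splits as r = r*e + r*(1-e), one summand in Re, one in R(1-e):
r = [[1, 2], [3, 4]]
assert add(mul(r, e), mul(r, sub(one, e))) == r

# But (1-e)(1+e) = 1 - e, not 1: here 1-e is a non-invertible projection.
assert mul(sub(one, e), add(one, e)) == sub(one, e)
```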
http://samjshah.com/2011/03/17/books-books-books/
# Books! Books! Books!

I’ve been a bit incommunicado lately. Nothing bad has happened! I’m not working harder than normal! I don’t know why but I haven’t been moved to post anything. And you know my feeling about blogging — it can’t be a chore, so don’t force it. That being said, I wanted to share something we’ve been diverted by in multivariable calculus recently: I gotta say, I love this class and I love working with these kids. They remind me how when you find the right thing, exploration is captivating. This is the “book overhang problem.” The question we dealt with was: can you stack books at the edge of a table so that the top book is off the table completely (meaning if you’re looking down on the stack of the books, the top book doesn’t lie over the table at all)? We haven’t yet found the optimal solution, but we’re going to be discussing our musings on Friday — what the best 3, 4, and 5 book configuration might be, and if we can generalize it. 1. The solution I’m familiar with has an overhang of 1/(n+1) for the nth book from the top. That allows an unbounded extension (see harmonic function H_n), but it gets difficult to construct after 5 or 6. 1. The wolfram analysis is correct. I misremembered the result slightly. You can do the analysis fairly easily by using the lever law, treating the stack above the current book as having weight (n-1) centered at the end of the current book. You want to push the book out by d until the whole stack is balanced with its centroid above 0. That is, $\int_0^d x\,dx + (n-1)d = \int_0^{1-d} x\,dx$, or $d^2/2 + (n-1)d = (1-d)^2/2$, or $2(n-1)d = 1 - 2d$, or $d = 1/(2n)$. 2. That’s similar to what my kids did! 2. Yay, I love this! I did something similar for my first interview when I had to bring a lesson to get hired. I found using cd-cases works well to get things to not tilt, but I think the huge calc books pack more visual punch. 3. “The solution I’m familiar with has an overhang of 1/(n+1) for the nth book from the top. 
That allows an unbounded extension (see harmonic function H_n), but it gets difficult to construct after 5 or 6.” That’s the solution I’m also familiar with, or at least I was until a student a few years ago questioned whether this was an optimal solution and found a counterexample for a few individual cases. Have fun exploring that one! 1. I’d be interested in seeing one of these counterexamples, as I thought there was a simple inductive proof that the sum of 1/n is optimal. 4. “I’d be interested in seeing one of these counterexamples” Tried to think of a hint that won’t give it all away…best I could come up with is to consider a case where the books aren’t all stacked on top of one another. 1. If the books aren’t stacked, it’s a different problem. What is the new statement of the problem then? 5. Hey Sam, thanks for the illustration here. I was in a room at a table the day after you posted this with a couple of guys, one trying to explain the problem to the other, and I was able to pull up your photo. Which was awesome. 6. Andy Johnson says: That looks like an ideal use for textbooks! Finally they are good for something!
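For the single-stack scheme in the comments (offset $d = 1/(2n)$ at the nth book from the top), the total overhang is half a harmonic number, which is easy to tabulate (a quick sketch, not from the post):

```python
from fractions import Fraction

# Classic single-stack solution: the k-th book from the top is offset
# 1/(2k) book-lengths past the one below it, so the overhang of the
# top book after n offsets is H_n / 2, half the n-th harmonic number.
def overhang(n):
    return sum(Fraction(1, 2 * k) for k in range(1, n + 1))

print([overhang(n) for n in range(1, 5)])
# With 4 offsets (a 5-book stack) the overhang 1/2 + 1/4 + 1/6 + 1/8
# = 25/24 exceeds one book length, so the top book clears the table.
```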
https://docs.belle2.org/record/2652?ln=en
BELLE2-CONF-PH-2021-013 Exclusive $B \to X_u \ell \nu_\ell$ Decays with Hadronic Full-event-interpretation Tagging in 62.8 fb$^{-1}$ of Belle II Data Moritz Bauer ; Pablo Goldenzweig ; Nadia Toutounji ; Kevin Varvell 09 September 2021 Abstract: We present a reconstruction in early data of the semileptonic decay $B^+ \to \pi^0 \ell^+ \nu_\ell$, and first results of a reconstruction of the decays $B^+ \to \rho^0 \ell^+ \nu_\ell$ and $B^0 \to \rho^- \ell^+ \nu_\ell$ in a sample corresponding to 62.8 fb$^{-1}$ of Belle II data using hadronic $B$-tagging via the full-event-interpretation algorithm. We determine the total branching fractions via fits to the distribution of the square of the missing mass, with $\mathcal{B}(B^+ \to \pi^0 \ell^+ \nu_\ell)$ = (8.29 $\pm$ 1.99$_{\mathrm{stat}}$ $\pm$ 0.46$_{\mathrm{sys}}$) $\times 10^{-5}$, $\mathcal{B}(B^+ \to \rho^0 \ell^+ \nu_\ell)$ = ($9.26 \pm 6.33_{\mathrm{stat}}$ $\pm$ 0.38$_{\mathrm{sys}}$) $\times 10^{-5}$ and $\mathcal{B}(B^0 \to \rho^- \ell^+ \nu_\ell)$ = ($1.51 \pm 1.13_{\mathrm{stat}}$ $\pm$ 0.09$_{\mathrm{sys}}$) $\times 10^{-4}$. We also quote an updated branching fraction for the $B^0 \to \pi^- \ell^+ \nu_\ell$ decay, $\mathcal{B}(B^0 \to \pi^- \ell^+ \nu_\ell)$ = (1.47 $\pm$ 0.29$_{\mathrm{stat}}$ $\pm$ 0.05$_{\mathrm{sys}}$) $\times 10^{-4}$, based on the sum of the partial branching fractions in three bins of the momentum transfer to the leptonic system. Keyword(s): Full-Event-Interpretation ; Hadronic Tagging ; Exclusive semi-leptonic The record appears in these collections: Conference Submissions > Papers
http://math.eretrandre.org/tetrationforum/showthread.php?tid=333&pid=3740&mode=threaded
Real and complex behaviour of the base change function (was: The "cheta" function) bo198214 Administrator 08/15/2009, 09:44 PM (This post was last modified: 08/15/2009, 10:06 PM by bo198214.) (08/15/2009, 07:13 PM)jaydfox Wrote: Simply pick all the points whose real part is equal to log((2k+1)*pi), k a non-negative integer, and whose imaginary part is equal to (2m+1)/2*pi, for m an integer. Exponentiating once will get you to +/- (2k+1)*pi*i, which exponentiating again will get you to -1. That's good for visualization. Quote: Note that this grid of points covers the entire right half of the complex plane, so that when we iteratively perform logarithms, we can always find points close to the real line. But still I don't get this. Iterated (branches of) logarithms take any point to one of the fixed points of exp. Why do they come arbitrarily close to the real axis with increasing n? I hope the picture will clarify. Even if this is the case, and I trust you enough to believe it, the bigger question is whether the singularities come arbitrarily close to *any* point of the real axis where the base change is defined. Or are there points with a certain neighborhood that does not contain a singularity of any $f_n$? If so, we would just use such a point for the power series development. Quote: because we have to wind in and around singularities to make sure we are in the primary branch of the logarithm of base eta. I think we anyway have to specify a cut system, as we have (branching) singularities. The functions $f_n$ are multivalued (if we continue passing a cut). If I understand that right, you set the cuts of $f_n$ to be the points that would be mapped to the negative real axis of any involved $\log_\eta$. See, if you define the value via a path, then you will usually get different values for non-homotopic paths (i.e. any deformation of one into the other would cross a singularity). 
So this is the same as a multivalued function. (The situation gets even worse if singularities appear on certain but not all branches.) The proper path that leads to using primary branch of the logarithm would then be a path that does not cross any cut of $f_n$. I think the cut system is like a labyrinth and hope you can provide a picture of it.
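The grid of points quoted in the post (real part $\log((2k+1)\pi)$, imaginary part $(2m+1)\pi/2$) can be checked numerically; this sketch just verifies the double-exponentiation claim for one choice of $k, m$:

```python
import cmath
import math

# A point z with Re z = log((2k+1)*pi) and Im z = (2m+1)*pi/2 maps to
# +/- (2k+1)*pi*i under exp, and then to -1 under a second exp.
k, m = 1, 0
z = complex(math.log((2 * k + 1) * math.pi), (2 * m + 1) * math.pi / 2)
w = cmath.exp(z)      # = 3*pi*i up to floating-point rounding
print(cmath.exp(w))   # very close to -1
```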
https://socratic.org/questions/5548d8cb581e2a142a8c20f5
# Question #c20f5

May 5, 2015

The easiest way to balance this chemical equation is to go by ions, not by atoms. If you write the complete ionic equation for this reaction, you'll get

$3NH_{4(aq)}^{+} + PO_{4(aq)}^{3-} + Ca_{(aq)}^{2+} + 2Cl_{(aq)}^{-} \to Ca_3(PO_4)_{2(s)} + NH_{4(aq)}^{+} + Cl_{(aq)}^{-}$

If you were to eliminate spectator ions, i.e. the ions that are present on both sides of the equation, you'll get the net ionic equation, which looks like this

$PO_{4(aq)}^{3-} + Ca_{(aq)}^{2+} \to Ca_3(PO_4)_{2(s)}$

Notice that you need 3 calcium atoms on the products' side, but only have 1 calcium cation on the reactants' side $\to$ multiply the calcium cation by 3. Likewise, multiply the phosphate anions by 2 to get them to match the number of phosphate ions present on the products' side. This will get you

$\color{red}{2}PO_{4(aq)}^{3-} + \color{blue}{3}Ca_{(aq)}^{2+} \to Ca_3(PO_4)_{2(s)}$

Now take these stoichiometric coefficients and use them in the overall equation

$\color{red}{2}(NH_4)_3PO_{4(aq)} + \color{blue}{3}CaCl_{2(aq)} \to Ca_3(PO_4)_{2(s)} + NH_4Cl_{(aq)}$

Now you only have to balance the ammonium ions, $NH_4^{+}$, and the $Cl$. Notice that you have 6 of each on the reactants' side, so multiply the ammonium chloride by 6 to balance everything out

$\color{red}{2}(NH_4)_3PO_{4(aq)} + \color{blue}{3}CaCl_{2(aq)} \to Ca_3(PO_4)_{2(s)} + \color{green}{6}NH_4Cl_{(aq)}$
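The final balanced equation can be double-checked by counting atoms of each element on both sides (a quick sketch, independent of the ionic method above):

```python
from collections import Counter

# Sanity check of 2 (NH4)3PO4 + 3 CaCl2 -> Ca3(PO4)2 + 6 NH4Cl
# by tallying atoms of each element on each side.
def atoms(counts, coeff):
    return Counter({el: n * coeff for el, n in counts.items()})

ammonium_phosphate = {"N": 3, "H": 12, "P": 1, "O": 4}
calcium_chloride = {"Ca": 1, "Cl": 2}
calcium_phosphate = {"Ca": 3, "P": 2, "O": 8}
ammonium_chloride = {"N": 1, "H": 4, "Cl": 1}

lhs = atoms(ammonium_phosphate, 2) + atoms(calcium_chloride, 3)
rhs = atoms(calcium_phosphate, 1) + atoms(ammonium_chloride, 6)
print(lhs == rhs)  # True: same atom counts on both sides
```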
https://lookformedical.com/en/sites/molecular-conformation
Cyclohexane Chair Conformation Video Tutorial as part of the Video Series by Leah4Sci- Learn about Chair Conformations including axial and equatorial interactions and stability with this step by step organic chemistry tutorial video A scan conversion process is performed on a polygon using a single pass technique. The pixels which comprise the edges and vertices of the polygon are first determined from the vertices which define the polygon. The alpha channel comprises either a sub-pixel mask associated with each pixel which indicates the amount and sub-pixel regions of coverage or a single value indicative of the percentage of coverage of a pixel. Furthermore, a z value indicative of the depth of each pixel is maintained. The pixels between the edge pixels of the polygon are then turned on, thereby filling the polygon. The pixels which comprise the polygon are then composited with the background pixels on a per pixel basis. The depth value of each pixel of the polygon (the z value) is used to determine the compositing equations to be used to composite each pixel of the polygon to the background. The compositing equations update the color of the pixel, the z buffer value of the background pixel and the sub-pixel mask to reflect the ... We study a generalization of the classical problem of illumination of polygons. 
Instead of modeling a light source we model a wireless device whose radio signal can penetrate a given number k of walls. We call these objects k-modems and study the minimum number of k-modems necessary to illuminate monotone and monotone orthogonal polygons. We show that every monotone polygon on n vertices can be illuminated with $\lceil n/2k \rceil$ k-modems and exhibit examples of monotone polygons requiring $\lceil n/(2k+2) \rceil$ k-modems. For monotone orthogonal polygons, we show that every such polygon on n vertices can be illuminated with $\lceil n/(2k+4) \rceil$ k-modems and give examples which require $\lceil n/(2k+4) \rceil$ k-modems for k even and $\lceil n/(2k+6) \rceil$ for k odd ... Lecture Notes in Computer Science, vol 12141. Springer, Cham. J. Torres, N. Hitschfeld, R. O. Ruiz and A. Ortiz-Bernardin. Abstract. In this work, we propose a new packing algorithm designed for the generation of polygon meshes to be used for modeling of rock and porous media based on the virtual element method. The packing problem to be solved corresponds to a two-dimensional packing of convex-shape polygons and is based on the locus operation used for the advancing front approach. Additionally, for the sake of simplicity, we decided to restrain polygon rotation in the packing process. Three heuristics are presented to simplify the packing problem: the density heuristic, the gravity heuristic and multi-layer packing. The decisions made by these three heuristics are, respectively: minimizing the area, inserting polygons at the minimum Y coordinate, and packing polygons in multiple layers by dividing the input into multiple lists. Finally, we illustrate the potential of the generated meshes by ... The synthesis of two pairs of the title diastereomers, which represent conformationally constrained analogues of the phenylcarbamate local anesthetics, is described. 
The synthesis was accomplished by starting from cycloheptanone and 2-alkoxyanilines, and the intermediate diastereomers of 2-aminomethylcycloalkanols (VI, VII) were separated as their 4-nitrobenzoyl derivatives (IV, V) by extraction and fractional crystallization. The prepared compounds (VIIIa, VIIIb, IXa, and IXb) are assumed to be of help in interpreting the structure-activity relationships within this class of drugs. ... When the first of these half-way boundaries is crossed, the corresponding polygon is chosen, the half-way point defined, and also a trace, a part of the polygon outline, is selected as follows: The start point and the half-way point define a direction. From the start point, inside the angle range 30-60 degrees to the left, one looks for a polygon point to act as trace endpoint (full blue circle). If there are several, one chooses the best as endpoint, which usually means the point nearest to the 45 degrees direction. If there is no point within that angle range but a polygon face, then a candidate endpoint (open circle) is placed on it at 45 degrees. Then repeat for the angle range to the right. There is a continuous polygon trace between two endpoints (blue); any segment of it connected to an endpoint is an endpoint segment. There may be intermediate points and intermediate segments (violet). Possibly this trace should be highlighted at that point in the GUI. ... Definition of frequency polygon in the Legal Dictionary - by Free online English dictionary and encyclopedia. What is frequency polygon? Meaning of frequency polygon as a legal term. What does frequency polygon mean in law? Bojarski, A. J.; Paluchowska, M. H.; Duszyńska, B.; Kłodzińska, A.; Tatarczyńska, E.; Chojnacka-Wójcik, E. 1-Aryl-4-(4-succinimidobutyl)piperazines and their conformationally constrained analogues: synthesis, binding to serotonin (5-HT1A, 5-HT2A, 5-HT7), α1-adrenergic and dopaminergic D2 receptors, and in vivo 5-HT1A functional characteristics. Bioorg. Med. Chem. 
2005, 13, 2293-3303 (http://www.ncbi.nlm.nih.gov/pubmed/15727878 ... A system is disclosed for clipping three-dimensional polygons for use in a computer-graphics display. The system removes from each polygon all parts that lie outside an arbitrary, plane-faced, convex polyhedron, e.g. a truncated pyramid defining a viewing volume. The polygon is defined by data representing a group of vertices and is clipped separately in its entirety against each clipping plane (of the polyhedron). In a multiple-stage structure as disclosed, each stage clips the polygon against a single plane and requires storage for only two vertex values. A time-sharing embodiment of the system is also disclosed. The disclosed system also incorporates the use of a perspective transformation matrix which provides for arbitrary field-of-view angles and depth-of-field distances while utilizing simple, fixed clipping planes.

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="initial-scale=1.0, user-scalable=no">
<meta charset="utf-8">
<title>Polygon with Hole</title>
<style>
/* Always set the map height explicitly to define the size of the div
 * element that contains the map. */
#map { height: 100%; }
/* Optional: Makes the sample page fill the window. */
html, body { height: 100%; margin: 0; padding: 0; }
</style>
</head>
<body>
<div id="map"></div>
<script>
// This example creates a triangular polygon with a hole in it.
function initMap() {
  var map = new google.maps.Map(document.getElementById("map"), {
    zoom: 5,
    center: {lat: 24.886, lng: -70.268},
  });
  // Define the LatLng coordinates for the polygon's outer path.
  var outerCoords = [
    {lat: 25.774, lng: -80.190},
    {lat: 18.466, lng: -66.118},
    {lat: 32.321, lng: -64.757}
  ];
  // Define the LatLng coordinates for the polygon's inner path.
  // Note that the points forming the inner path are wound in the
  // opposite direction to those in the outer path, to form the ... 
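The "opposite winding" convention in the Maps sample has a simple numeric signature: the signed (shoelace) area is positive for counter-clockwise vertex order and negative for clockwise. A small sketch (illustrative coordinates, not from the sample):

```python
# Signed (shoelace) area: positive for counter-clockwise vertex order,
# negative for clockwise, which is how an inner path wound opposite to
# the outer path encodes a hole.
def signed_area(pts):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

outer = [(0, 0), (4, 0), (4, 4), (0, 4)]      # counter-clockwise
hole = [(1, 1), (1, 3), (3, 3), (3, 1)]       # clockwise
print(signed_area(outer), signed_area(hole))  # 16.0 -4.0
# Net enclosed area = 16 - 4 = 12
```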
The first phase of the 5-year Monarch Pass Vegetation Management Project aims to treat the yellow priority polygons shown below. While this initial effort will not cover the entirety of these areas, work is on track for the majority of these polygons to receive some type of treatment. All of these zones are located within the ski area boundary on somewhat lower angle terrain where the timber can be removed without the use of winch assisted machinery or helicopters. US Highway 50 can be seen running from top to bottom on the right side of the polygons. The yellow polygons on the lower right-hand side are located along Gunbarrel Ridge adjacent to Old Monarch Pass Road. Central areas being treated include the Lower No Name trees and the Ajax runout. The Sleepy Hollow corridor down towards the Tumbelina Lift and the lower Panorama basin will comprise the rest of the main objectives for this initial effort. ... vertex and segment objects used by polygon. vertex.php This package can be used to perform different types of geometric operations with polygons. It provides generic polygon and vertex classes that support mixing lines and arc segments between vertices. Polygons may be self-intersecting. It provides means to perform boolean operations AND and OR (Intersect and Union) with the shapes and... [0012] At 135, the computer processor is configured to execute a conversion tool to convert the model from the CAD format into the lightweight format. At 140, the reduced mesh is exported into the lightweight format. The lightweight format can be a .JT format (145). The conversion tool can be an Okino Polytrans tool (150). The animation tool can be an Autodesk 3D Studio Max product (153). At 155, the animation tool combines the polygons of the lightweight format into a reduced mesh via a 3D Studio Max attach list function. At 160, the animation tool optimizes the reduced mesh by reducing the count of polygons of the model via a 3D Studio Max ProOptimizer function. 
At 165, the computer processor is configured to use the exported optimized mesh in a real-time application, and at 167, the real-time application includes one or more of a virtual reality application and a computer game application. At 175, the animation tool combines the polygons of the lightweight format into a reduced mesh by ... A Polygon is a special surface that is defined by a single surface patch (see D.3.6). The boundary of this patch is coplanar and the polygon uses planar interpolation in its interior. The elements exterior and interior describe the surface boundary of the polygon. [Data: total value locked on Polygon is now $5.54 billion] Jinse Finance reports that, according to DeBank data, the total value locked on Polygon is currently $5.54 billion, with a net locked value of $4.44 billion. The top five protocols by locked assets are Aave ($2.2 billion), QuickSwap ($1.1 billion), SushiSwap ($483 million), Curve ($478 million), and Balancer V2 ($202 million).

Private Sub DrawShape(iNumSides As Integer, dRadiusInches As Double)
    Dim i As Integer
    Dim shp As Visio.Shape
    Dim xy() As Double
    Dim ang As Double, angDelta As Double
    ' Create an array to hold all of the points:
    ReDim xy(1 To iNumSides * 2 + 2)
    angDelta = 3.14159265358 / iNumSides
    ' Use trigonometry to calculate each vertex:
    For i = 1 To UBound(xy) Step 2
        ang = (i - 2) * angDelta
        xy(i) = dRadiusInches + dRadiusInches * VBA.Math.Cos(ang)
        xy(i + 1) = dRadiusInches + dRadiusInches * VBA.Math.Sin(ang)
    Next i
    ' Use Visio's DrawPolyline function to create the shape:
    Set shp = Visio.ActivePage.DrawPolyline(xy, 0)
    ' flag = visPolyline1D or visPolyarcs or just 0
    ' Close off the polygon by setting the last geometry
    ' row's formulas to reference the first row:
    shp.Cells("Geometry1.X2").Formula = "Geometry1.X1"
    shp.Cells("Geometry1.Y2").Formula = "Geometry1.Y1"
    ' Set the polygon to be filled:
    shp.Cells("Geometry1.NoFill").Formula = "FALSE"
End Sub

... Suggested Price: $9.99 -- This means you can pay $9.99 to download and use these Polygon Center Circle graphics. Alternatively, please pay whatever you wish -- and less than $9.99 is fine too (the minimum price is set at $4.99).
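The VBA macro above places regular-polygon vertices using cosine/sine at equal angular steps around a center. The same trigonometry as a standalone Python sketch (the function name and signature are my own):

```python
import math

def regular_polygon(n_sides, radius, cx=0.0, cy=0.0):
    """Vertices of a regular n-gon of the given circumradius, centered at (cx, cy).

    Mirrors the trigonometry in the VBA macro: each vertex sits at an equal
    angular step of 2*pi/n around the center.
    """
    step = 2 * math.pi / n_sides
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step)) for i in range(n_sides)]
```

For example, `regular_polygon(4, 1.0)` yields the four vertices of a unit-circumradius square starting at (1, 0).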
We can make these products available to you at a low cost because of your kind support -- thank you! The ZIP file that you will download contains: 4 shapes (vector shapes) contained within PowerPoint slides, so that you just copy and paste within your slides. Each individual shape is a native PowerPoint shape that can be re-sized, rotated, or moved as required. You can also animate individual shapes as required. Each shape can be filled with all shape fill types -- black and white shapes are already included. So go ahead and download this product! Don't you want your slides to have some Polygon Center Circles today? Hello, I have a simple question about polygon rotation that I couldn't find the answer to in the reference manual. I want to know what the origin point of rotation is in the function GUI_RotatePolygon. I mean, this function rotates the polygon around to which… The paper deals with the non-constancy of the Jacobian Newton polygon in an equisingular family of complete intersection branches. In more detail, for a hypersurface germ $$f(u_0,\dots,u_n)=0$$ consider the map $$(l,f):~(\mathbb C^{n+1},0)\rightarrow(\mathbb C^{2},0)$$. Here $$l$$ is a generic linear form. Let $$t_0,t_1$$ be the coordinates on $$\mathbb C^2$$. The Jacobian Newton polygon of the hypersurface is defined to be the Newton polygon of the discriminant of the above map (in the coordinates $$(t_0,t_1)$$). In general, under equisingular deformations of the hypersurface $$(f=0)$$, the discriminant changes (e.g. the number of branches can vary). However, the Newton polygon of the discriminant is known to be constant. Even more, in the case of unibranched plane curves the Jacobian Newton polygon is a complete invariant of the singularity type (e.g., the semigroup or Puiseux characteristics can be restored). A natural question therefore is the constancy of the Jacobian Newton polygon for curves ...
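I cannot speak for emWin's GUI_RotatePolygon specifically (the question above stands), but the usual convention in 2D graphics is rotation about a caller-chosen origin point. As an illustration only, a small Python sketch (all names are mine) that rotates polygon vertices around an explicit origin:

```python
import math

def rotate_polygon(points, angle_deg, origin=(0.0, 0.0)):
    """Rotate 2D polygon vertices by angle_deg (counter-clockwise) around `origin`.

    Each vertex is translated so the origin is at (0, 0), rotated with the
    standard 2D rotation matrix, and translated back.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ox, oy = origin
    return [(ox + (x - ox) * cos_a - (y - oy) * sin_a,
             oy + (x - ox) * sin_a + (y - oy) * cos_a) for x, y in points]
```

Rotating about the polygon's centroid instead of (0, 0) is just a matter of passing the centroid as `origin`.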
function in my polygon class, and I would rather not update the position every tick (it would clutter up my tick functions). At the same time, I would really like to have the positions in there somehow (it has a drawing function, and I will be using it for hit detection). How would I go about implementing this polygon object into my entities? ... A good way to see what Michael is talking about is to use STBuffer to expand the polygon slightly. In doing this, it has to define more points around the boundary of the polygon. The key thing to remember is that lines of latitude are rings parallel to the equator; they do not define the shortest distance between two points. So the line from your point 47.0 -155.0 to 47.0 -85.0 does not follow the 47th parallel but actually arcs up to roughly the 52.5th parallel. Get Morten's spatial tool and plot the shapes. If you plot them as you have them, it looks like they overlap; however, the tool does allow for the curvature of the earth. Create the shape using the STBuffer method, i.e. @PolyBig.STBuffer(-1).STBuffer(1).ToString(), and then plot it; you will see that the shape is actually curved. ... Although Mathprof has adopted and edited this entry so that the concepts are mathematically precise, I am concerned about the accessibility of this entry to non-mathematicians. The term polygon is one that people encounter very early on, and thus, in my opinion, it should have an entry that gives basic information about polygons that is both mathematically precise and accessible to the general population. Also, I am pretty sure that this entry went up for adoption because terms such as interior angle and exterior angle are difficult to define in a mathematically precise way. Nevertheless, these terms (along with angle sum) are commonly used and should appear somewhere in PM.
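The claim above, that the segment from 47.0 -155.0 to 47.0 -85.0 arcs up to roughly the 52.5th parallel, can be checked directly: the great-circle midpoint of two points on the same parallel lies poleward of that parallel. A small Python sketch (my own, not SQL Server code):

```python
import math

def midpoint_latitude(lat, lon1, lon2):
    """Latitude (degrees) of the great-circle midpoint of two points
    that share the same latitude `lat` but have longitudes lon1, lon2.

    Works by converting to 3D unit vectors, averaging, and reading the
    latitude of the normalized sum (the chord midpoint projected onto the sphere).
    """
    phi = math.radians(lat)

    def to_vec(lon):
        lam = math.radians(lon)
        return (math.cos(phi) * math.cos(lam),
                math.cos(phi) * math.sin(lam),
                math.sin(phi))

    v1, v2 = to_vec(lon1), to_vec(lon2)
    s = [a + b for a, b in zip(v1, v2)]
    norm = math.sqrt(sum(c * c for c in s))
    return math.degrees(math.asin(s[2] / norm))
```

`midpoint_latitude(47.0, -155.0, -85.0)` comes out at about 52.6 degrees, matching the "~52.5 latitude" observation in the post.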
I have been toying with creating another polygon entry which is meant for people who do not have the mathematical background necessary to understand the bulk of the content of the current entry. Before doing this, I wanted to get other people's opinions on the matter. ... Dark Polygon: see all the programs for download developed by Dark Polygon listed on Baixaki. You can filter the list by operating system, license, downloads, date, and rating. The Polygon Siskiu T8 is an excellent mid-travel trail bike offered at a reasonable price. Polygon's consumer-direct sales model helps keep it affordable... A molecular theory of the smectic A-smectic C transition in a system of biaxial molecules is developed in the mean-field approximation. The influence of molecular biaxiality on the transition is considered in detail and it is demonstrated how the biaxial order parameters are induced by the tilt. It is shown that the ordering of biaxial molecules of low symmetry in the smectic C phase is generally described by ten independent orientational order parameters, and there exist three different tilt angles which specify the tilt of three ordering tensors. The order parameters are calculated numerically as functions of temperature for two models of biaxial molecules: molecules with two principal axes and molecules with a pair of off-center transverse dipoles. A substantial difference between the three tilt angles is found, which makes a strict definition of a unique director in the smectic C phase impossible. It is also shown that biaxial interactions may lead to an anomalously weak layer contraction in ...

On Saturday 13 November 2004 12:45 am, you wrote:
> Dylan Beaudette wrote:
> > If it hasn't, I am wondering if there are a couple of vector commands
> > that could do the following:
> >
> > given an input polygon map of several categories, and a line drawn across
> > these categories,
> >
> > 1.
> > 1. sample the polygon boundaries (and their respective category) that the
> > line intersects
> >
> > 2. put the above information into a new vector line that contains line
> > segments that have an attribute matching the category of the polygon
> > that they intersected.
>
> v.overlay
>
> Radim

Seems that I just figured out the answer to my last question in regards to line vector types. However, I now have another, perhaps more complicated, problem. I have two maps: t_1 = line map containing a single line; fresno_west = polygon map with many polygons, and multiple attributes per polygon:

-------------------------
db.describe -c fresno_west
Column 1: cat
Column 2: AREA
Column 3: PERIMETER
Column 4: ...

[Narrator] So I want to take a few minutes to look at…the primary modeling tools found in Maya.…Now this will not be a full exploratory video…just on all of the modeling tools inside of Maya,…but I did want to go over what the basic tools are…that I use primarily when modeling either polygons…or NURBS.…So the first thing I'm going to do is come over to the right…and choose the Modeling Toolkit.…This is actually a place that has almost all…of the tools that you'll need, and Maya has them in one…nice, neat little area.… Now for my work view I typically like to drag this tab off…and put it right beside my Channel Box…in my Attribute Editor so that I still have access…to getting to specific channels and to my display layers…and having access to my Modeling Toolkit.…So the first tools that I want to talk about are the difference…between smoothing and sub-dividing and actually dividing…your object.…I have two cubes here on screen and I'm going to select…the first ... Polygon Resistance Loop Bands. Features of the loop bands set: Easy to Use: Easy to use with the full-body workout manual guide. Portability: The lightweight and Join a community of over 2.6m developers to have your questions answered on Polygon rotation of UI for Silverlight Map. New here? Start with our free trials.
Curves of constant width. A line drawn through the center of the (inner) circle will always have the same edge-to-edge length between sides of the outer shape, regardless of angle. Of course, SketchUp isn't precise enough to show this in micro-detail. #curve #Polygon #Reuleaux #symmetry Shop Polygon Medium Wire Table and see our wide selection of Side + End Tables at Design Within Reach. In stock, exclusive, and ready to ship -- authentic modern furniture from iconic designers. Hi! I'm new to Blender; I used to use 3ds Max, and I decided to give Blender a chance. My question is: is there any way to smooth a mesh without having to deal with thousands of polygons? In 3ds Max I would create a sphere with 32 segments and it would render 960 polys, really quickly. In Blender, the render looks smooth only after applying Subsurf/Catmull-Clark with 3 subdivisions, which renders 49,600 polys! And given that I do this just for fun, and my PC is awful, it results in a considerable difference in time between 3ds Max and Blender... Is there anything I could do ... Elevate your workflow with the Polygon - Wild West asset from AnimPic Studio. Find this & other Environments options on the Unity Asset Store. Enjoy this fun-filled game and at the same time learn more and more about polygon formulas, but watch out, you don't want to get hung! 1994. King, Philip B., Arndt, Raymond E., Schruben, Paul G., Bawiec, Walter J., and Beikman, Helen M. This polygon shapefile shows the boundaries of geologic units within the United States. This layer is part of the Geologic Map of the United States... Geological Survey (U.S.). ... Polygon AB (publ) reg.
no 556816-5855 (the Company) today announces the successful completion of its consent solicitation from the holders of the Company's outstanding up to maximum EUR 180,000,000 3M EURIBOR + 5.00% senior secured floating rate Notes due 2019, with a current outstanding total nominal amount of EUR 120,000,000 (ISIN SE0005878535) (the Notes), regarding certain amendments to the terms and conditions of the Notes (the Proposals). The Proposals became effective immediately as of 26 October 2016. ... We have found 52 NRICH Mathematical resources connected to 2D shapes and their properties; you may find related items under Angles, Polygons, and Geometrical Proof. Elevate your workflow with the ⚡POLYGON Dungeon Realms - Low Poly 3D Art by Synty asset from Synty Studios. Find this & other Dungeons options on the Unity Asset Store. We have received your message and would like to thank you for writing to us. If your inquiry is urgent or country specific, please find your country office under http://www.polygongroup.com/ Global. Otherwise, we will get back to you shortly. Best regards, Polygon International. ... I have used Blender for 6 years, and I know all the current possibilities for splitting a polygon using the Knife tool. But it would be quicker and easier for 3ds Max and Maya users to use Blender ... PolygonFarm Finance is a Hybrid Yield Farm Aggregator Platform on Polygon Network | Polygonfarm - Polygonfarm.finance traffic statistics. Polygon is a gaming website in partnership with Vox Media. Our culture-focused site covers games, their creators, the fans, trending stories and entertainment news. Storing a Collection of Polygons Using Quadtrees. HANAN SAMET, University of Maryland, and ROBERT E. WEBBER, Rutgers University. An adaptation of the quadtree data structure that represents polygonal maps (i.e., ... Three (S)-prolinol-derived conformationally restricted analogues of the antitubercular agent ethambutol were prepared and tested against Mycobacterium tuberculosis.
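The Samet-Webber paper cited above adapts quadtrees to polygonal maps. As a rough illustration of the underlying recursive decomposition only, here is a plain point-region quadtree sketch (not the paper's edge-based variant; all names and the capacity rule are my own):

```python
class Quadtree:
    """Minimal point-region quadtree over the square [x, x+size) x [y, y+size).

    A node stores points directly until it exceeds `capacity`, then splits its
    square into four child quadrants and pushes its points down. Illustrative
    only; the Samet/Webber paper stores polygon edges, not bare points.
    """

    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size, self.capacity = x, y, size, capacity
        self.points = []
        self.children = None

    def insert(self, px, py):
        # Reject points outside this node's square.
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        # Create the four quadrants and redistribute stored points.
        h = self.size / 2
        self.children = [Quadtree(self.x + dx * h, self.y + dy * h, h, self.capacity)
                         for dx in (0, 1) for dy in (0, 1)]
        for (px, py) in self.points:
            any(c.insert(px, py) for c in self.children)
        self.points = []
```

The same split-until-homogeneous idea is what lets a quadtree index a polygonal map region by region.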
A uniform derivation of the self-consistent field equations in a finite basis set is presented. Both restricted and unrestricted Hartree-Fock (HF) theory as well as various density functional approximations are considered. The unitary invariance of the HF and density functional models is discussed, paving the way for the use of localized molecular orbitals. The self-consistent field equations are derived in a non-orthogonal basis set, and their solution is discussed also in the presence of linear dependencies in the basis. It is argued why iterative diagonalization of the Kohn-Sham-Fock matrix leads to the minimization of the total energy. Alternative methods for the solution of the self-consistent field equations via direct minimization as well as stability analysis are briefly discussed. Explicit expressions are given for the contributions to the Kohn-Sham-Fock matrix up to meta-GGA functionals. Range-separated hybrids and non-local correlation functionals are summarily reviewed ...

Klaver, Peter and Chen, J.H. (2003). Density Functional Theory study of alloy interstitials in Al. Journal of Computer-Aided Materials Design 10, 155-162. ISSN 0928-1045.

Fujiki, Ryo; Kasai, Yukako; Seno, Yuki; Matsui, Toru; Shigeta, Yasuteru; Yoshida, Norio; Nakano, Haruyuki (2018). A computational scheme of pKa values based on the three-dimensional reference interaction site model self-consistent field theory coupled with the linear fitting correction scheme. Abstract: A scheme for quantitatively computing the acid dissociation constant, pKa, of hydrated molecules is proposed. It is based on the three-dimensional reference interaction site model self-consistent field (3D-RISM-SCF) theory coupled with the linear fitting correction (LFC) scheme.
In LFC/3D-RISM-SCF, pKa values of target molecules are evaluated using the Gibbs energy difference between the protonated and unprotonated states calculated by 3D-RISM-SCF, with parameters fitted by the LFC scheme to the experimental values of training-set systems. The pKa values computed by LFC/3D-RISM-SCF show quantitative agreement with the ... A polygon is a closed two-dimensional shape. It is a simple curve made up of straight line segments, and it usually has three or more sides/corners. It could also be referred to as "a closed plane figure bounded by three or more straight line segments." It has a number of corners (vertices) connected by straight edges. A square is a polygon because it has four sides. The smallest possible polygon in Euclidean (flat) geometry is the triangle, but on a sphere there can be a digon and a henagon. If the edges (lines of the polygon) do not intersect (cross each other), the polygon is called simple; otherwise it is complex. In computer graphics, polygons (especially triangles) are often used to make graphics. ... Instead of creating a polygon layer with orange polygons representing only coral material, the raster-to-polygon tool instead fills in the space inside the coral-material polygons, which is supposed to be substrate, so that it is also treated as coral material. This is incorrect. Is there any way that I can either: ... Photo-induced phase transitions are characterized by the transformation from phase A to phase B through the absorption of photons. We have investigated the mechanism of the photo-induced phase transitions of four different ternary systems CiE4/alkane (i = 8, 10, 12, 14)/cyclohexane/H2O. We were interested in understanding the effect of chain-length increase on the dynamics of transformation from the microemulsion phase to the liquid crystal phase.
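The simple-versus-complex distinction drawn above can be tested by brute force: a polygon is simple when no two non-adjacent edges cross. A small Python sketch (my own; it ignores degenerate collinear overlaps):

```python
def segments_intersect(p, q, r, s):
    """Proper-intersection test for segments pq and rs using orientation signs."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s) and
            orient(r, s, p) != orient(r, s, q))

def is_simple(points):
    """True if no two non-adjacent edges of the closed polygon cross.

    Brute-force O(n^2) check; adjacent edges are skipped because they
    legitimately share a vertex.
    """
    n = len(points)
    edges = [(points[i], points[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # adjacent edges share a vertex
            if segments_intersect(*edges[i], *edges[j]):
                return False
    return True
```

A square is simple; a "bowtie" like [(0, 0), (1, 1), (1, 0), (0, 1)] is complex because its first and third edges cross.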
Applying light pump (pulse)/X-ray probe (pulse) techniques, we could demonstrate that entropy and diffusion control are the driving forces for the kind of phase transition investigated. Abstract: A new isolated-pentagon-rule (IPR) C100(417)Cl28 has been captured, but its formation mechanism is still unclear. Herein we have used density functional theory (DFT) to study the possible reaction pathways, including Stone-Wales (SW) transformation, direct chlorination, and skeletal transformation for C100(417). The calculated results show that the major source of C100(417) is the skeletal transformation of C102(603), including chloride formation, C2 elimination, and SW transformation. The results satisfactorily explain the experimental observations and provide useful guidance for the synthesis of fullerene chlorides. Key words: Density functional theory, Fullerene chloride, Skeletal transformation ... The present research focuses on the vibrational simulation of biological systems, including harmonic-level normal-mode analysis and the anharmonic-level vibrational self-consistent field (VSCF) calculation. Normal-mode analysis was performed on a large biomolecule--ricin A-chain (RTA)--in both the apo (no substrate) and holo (adenosine monophosphate (AMP)-bound) states. It revealed that the shearing motion was shared by both apo- and holo-RTA, whereas the breathing motion, as well as the upward-hinge and α-G bending characteristic motions, was dampened by substrate binding. We hypothesize that the breathing, α-G bending, and upward hinging motions play an important role in substrate binding, as these motions facilitate the entry of the substrate and provide space for the substrate realignment that is necessary for depurination. The VSCF calculations, which were typically restricted to small systems, were performed in the present research on a moderate biomolecule--the VA-class ... Sometimes it is handy to sort an edge list.
In this case I needed an algorithm to test for concavity of a simple 3D polygon with just one face. You can also apply the procedure in 2D, because it just sorts an edge list that could contain either 2D or 3D vertices. The polygons were made in Blender v2.67, so the script had to be written in Python and executed via the Run Script button in the text editor. I didn't want to use fancy algorithms to sort edges because we're dealing with simple polygons, so I ended up writing my own. As a side note, the edge-angle checkbox in Blender, which can be used to see if a polygon is convex or concave, didn't work for me, so I had no other choice but to sort the edges first before applying angle calculations to consecutive vertices. Suggestions for improvements are welcome, and hopefully this helps someone else who has had to deal with the same (or similar) issues in Blender! The basic idea is that you build up your sorted edge list e step by step, starting by adding a ... We demonstrate in a combined two-color pump-probe and quantum dynamical study that population of the O-H stretching oscillator of a medium-strong intramolecular hydrogen bond is redistributed along the ... Hello everyone. I have an electrically very large polygon, and inevitably the generated mesh is very large, which leads to impractically long simulation times. What I did was to separate the polygon into 3 rings: I first created an outer ring and defined a local fine mesh (lambda/10), then I created the second ring and defined a coarser mesh (lambda/6), and I defined the rest of the region as a polygon with mesh cell size lambda/4. But the thing is that when I merge the 3 parts to perform the simulation Thank you for your interest in spreading the word about the Biochemical Journal. NOTE: We only request your email address so that the person you are recommending the page to knows that you wanted them to see it, and that it is not junk mail. We do not capture any email address. ...
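The procedure described above (chain the unordered edge list into a loop first, then run angle checks on consecutive vertices) can be sketched outside Blender as plain Python. This is my own minimal version, assuming a closed polygon whose edges are given as unordered (a, b) vertex-index pairs, with the convexity check done on 2D coordinates via cross-product signs:

```python
def sort_edges(edges):
    """Chain an unordered list of (a, b) edges into an ordered vertex loop.

    Assumes the edges form exactly one closed loop (a simple one-face polygon).
    """
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    start = edges[0][0]
    loop = [start]
    prev = None
    while True:
        nxt = [v for v in adj[loop[-1]] if v != prev]
        prev = loop[-1]
        if nxt[0] == start:
            break
        loop.append(nxt[0])
    return loop

def is_convex(points):
    """True if the ordered 2D polygon is convex: all turn directions agree.

    Collinear vertices (zero cross product) are tolerated.
    """
    n = len(points)
    signs = set()
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross:
            signs.add(cross > 0)
    return len(signs) <= 1
```

With the loop ordered, a mixed set of cross-product signs flags a concave vertex, which is exactly the consecutive-vertex angle test the post describes.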
Reviews: M. Stępień, L. Latos-Grażyński, and N. Sprutta, DOI: 10.1002/anie.201003353. Porphyrinoids. Figure Eights, Möbius Bands, and More: Conformation and Aromaticity of Porphyrinoids.** Marcin Stępień,* Natasza Sprutta, and Lechosław Latos-Grażyński.* Keywords: aromaticity · chemical topology · conformational analysis · NMR spectroscopy · porphyrinoids. Dedicated to Professor Alan L. Balch on the occasion of his 70th birthday. Angew. Chem. Int. Ed. 2011, 50, 4288-4340 (2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim). The aromatic character of porphyrins, which has significant chemical and biological consequences, can be substantially altered by judicious modifications of the parent ring system. Expansion of the macrocycle, which is achieved by introducing additional subunits, usually increases the so-called free curvature of the ring, leading to larger angular strain. This strain is reduced by a variety of conformational changes, most ... This thesis describes our investigation of microstructure and phase behavior in colloids and liquid crystals. The first set of experiments explores the phase behavior of helical packings of thermoresponsive microspheres inside glass capillaries as a function of volume fraction. Stable helical packings are observed with long-range orientational order.
Some of these packings evolve abruptly to disordered states as the volume fraction is reduced. We quantify these transitions using correlation functions and susceptibilities of an orientational order parameter. The emergence of coexisting metastable packings, as well as coexisting ordered and disordered states, is also observed. These findings support the notion of phase-transition-like behavior in quasi-one-dimensional systems. The second set of experiments investigates cross-over behavior from glasses with attractive interactions to sparse gel-like states. In particular, the vibrational modes of quasi-two-dimensional disordered colloidal packings of hard What makes a polygon? Get your child in gear for geometry with this shape-shifting worksheet! Color code these shapes to show which ones are polygons. Artist: Johnny Polygon. Track: Day Dreamin. Producer: Oklahoma. Album: Group Hug (Bonus Track). Johnny Polygon's a hard emcee to pin down, but one thing's for In chemistry, a torsion angle is defined as a particular example of a dihedral angle, describing the geometric relation of two parts of a molecule joined by a chemical bond.[4][5] Every set of three non-collinear atoms of a molecule defines a plane. When two such planes intersect (i.e., for a set of four consecutively bonded atoms), the angle between them is a dihedral angle. Dihedral angles are used to specify the molecular conformation.[6] Stereochemical arrangements corresponding to angles between 0° and ±90° are called syn (s); those corresponding to angles between ±90° and 180°, anti (a). Similarly, arrangements corresponding to angles between 30° and 150° or between −30° and −150° are called clinal (c), and those between 0° and ±30° or ±150° and 180° are called periplanar (p). The two types of terms can be combined so as to define four ranges of angle: 0° to ±30° synperiplanar (sp); 30° to 90° and −30° to −90° synclinal (sc); 90° to 150° and −90° to −150° ...
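Since a dihedral angle is defined by four consecutively bonded atoms, it can be computed directly from their coordinates. A standard atan2 formulation in Python (my own sketch; the sign convention follows the usual right-hand rule about the central bond):

```python
import math

def dihedral(p1, p2, p3, p4):
    """Signed dihedral angle in degrees for four consecutively bonded atoms.

    Uses the normals of the planes (p1, p2, p3) and (p2, p3, p4) and the
    atan2 form, which is numerically stable near 0 and 180 degrees.
    """
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    b1, b2, b3 = sub(p2, p1), sub(p3, p2), sub(p4, p3)
    n1, n2 = cross(b1, b2), cross(b2, b3)      # plane normals
    m = cross(n1, n2)
    y = dot(m, b2) / math.sqrt(dot(b2, b2))    # signed sine component
    x = dot(n1, n2)                            # cosine component
    return math.degrees(math.atan2(y, x))
```

The result can then be binned into the sp/sc/ac/ap ranges quoted above: 0° is synperiplanar, 180° antiperiplanar, and so on.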
__NOTITLE__

=Hartree-Fock=

The NWChem self-consistent field (SCF) module computes closed-shell restricted Hartree-Fock (RHF) wavefunctions, restricted high-spin open-shell Hartree-Fock (ROHF) wavefunctions, and spin-unrestricted Hartree-Fock (UHF) wavefunctions. The Hartree-Fock equations are solved using a conjugate-gradient method with an orbital-Hessian-based preconditioner<ref>Wong, A. T. and Harrison, R. J. (1995) Approaches to large-scale parallel self-consistent field calculation, J. Comp. Chem. 16, 1291-1300, doi: [http://dx.doi.org/10.1002/jcc.540161010 10.1002/jcc.540161010]</ref>. The module supports both replicated-data and distributed-data Fock builders<ref>Foster, I. T.; Tilson, J. L.; Wagner, A. F.; Shepard, R. L.; Harrison, R. J.; Kendall, R. A. and Littlefield, R. J. (1996) Toward high-performance computational chemistry: I. Scalable Fock matrix construction algorithms, J. Comp. Chem. 17, 109-123, doi: ...</ref> 1.D.148. The Synthetic Ion Channel Formed by Multiblock Amphiphile with Anisotropic Dual-Stimuli-Responsiveness (ChMAAR) Family. Inspired by the structures and functions of natural ion channels that can respond to multiple stimuli in an anisotropic manner, Sasaki et al. (2021) developed the multiblock amphiphile VF. When VF was incorporated into lipid bilayer membranes, VF formed a supramolecular ion channel whose ion transport properties were controllable by the polarity and amplitude of the applied voltage. Microscopic emission spectroscopy revealed that VF changed its molecular conformation in response to the applied voltage. Furthermore, the ion transport property of VF could be reversibly switched by the addition of (R)-propranolol, an aromatic amine known as an antiarrhythmic agent, followed by the addition of beta-cyclodextrin for its removal. The highly regulated orientation of VF allowed for an anisotropic dual-stimuli-responsiveness as a synthetic ion channel (Sasaki et al. 2021). ...
A global optimization method is presented for predicting the minimum energy structure of small protein-like molecules. This method begins by collecting a large number of molecular conformations, each... Description: [email protected] is a research project that uses Internet-connected computers to solve challenging large-scale optimization problems. The goal of optimization is to find a minimum (or maximum) of a given function. This topic is well explained on the Internet; see, for example, the excellent explanation by Arnold Neumaier. Many practical problems reduce to global optimization problems. At the moment this project runs an application aimed at solving the molecular conformation problem. This is a very challenging global optimization problem consisting of finding the atomic cluster structure that has the minimal possible potential energy. Such structures play an important role in understanding the nature of different materials, chemical reactions, and other fields ... CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): Abstract. This paper presents a motion control algorithm for a planar mobile observer such as, e.g., a mobile robot equipped with an omni-directional camera. We propose a nonsmooth gradient algorithm for the problem of maximizing the area of the region visible to the observer in a simple nonconvex polygon. First, we show that the visible area is almost everywhere a locally Lipschitz function of the observer location. Second, we provide a novel version of the LaSalle Invariance Principle for discontinuous vector fields and Lyapunov functions with a finite number of discontinuities. Finally, we establish the asymptotic convergence properties of the nonsmooth gradient algorithm and we illustrate its performance numerically. 1. Introduction.
Consider

data Polygon (t :: PolygonType) p r where
  SimplePolygon :: C.CList (Point 2 r :+ p) -> Polygon Simple p r
  MultiPolygon  :: C.CList (Point 2 r :+ p) -> [Polygon Simple p r] -> Polygon Multi p r

In all places this extra data is accessible by the (:+) type in Data.Ext, which is essentially just a pair.

Reading and Writing Ipe files
-----------------------------

Apart from geometric types, HGeometry provides some interface for reading and writing Ipe (http://ipe.otfried.org). However, this is all very much work in progress. Hence, the API is experimental and may change at any time ...

2011. United States. Bureau of the Census. Geography Division and United States. Bureau of Transportation Statistics. This polygon shapefile depicts hydrography coverages that were created using TIGER/Line 2000 shapefile data gathered from ESRI's Geography Network.... United States. Bureau of Transportation Statistics. ... Download the Appbar.vector.polygon icon vector now. Browse through more appbar- and vector-related vectors and icons. Available as a PNG, ICO or ICNS icon for Mac. 2001. This polygon shapefile represents socioeconomic statistics for each village in the state of Bihar in 2001. The set utilizes the official ... Socioeconomic Data and Applications Center. ...
When I move the thumb so that the bounding rect is the same size as before moving (it's not easy to move it that way with a triangle, but if there are more corners, it's easier to find movements where the bounding rect size doesn't change), my polygon isn't redrawing ... Some notes on a problem with .ase models with more than 1024 polygons and smoothing groups not lighting properly for both vertex lighting and lightmaps. We investigated the evolution of granular rods from mechanically stable disordered to crystalline states in response to vibrations. We obtained positions and orientations of the rods in three dimensions using micro-focus X-ray Computed Tomography. Above a critical aspect ratio, we find that rods align vertically in layers with hexagonal order within a layer, independent of the shape of the container and the details of the form of vibration. We also quantitatively study the evolution of local and global ordering using the density pair correlation function $g(r)$ and the orientational order parameter $q_{6}$ as a function of aspect ratio. As the system compacts, local structures emerge and grow, their size and orientation being dependent on volume fraction. Although the initial nucleation of order occurs along the boundaries, we show that the geometry of the boundaries has little overall effect on the observed ordering transition. Finally, we show that configurational entropy arguments do not play a significant ... The trimetaphosphimate anion (PO2NH)3^3- in trisodium cyclo-tri-μ-imidotriphosphate monohydrate, Na3(PO2NH)3·H2O, exhibits a chair conformation. Two trimetaphosphimate rings are linked to each other by six N-H...O hydrogen bonds, forming pairs. These units are interconnected by O-H...O hydrogen bonds through water molecules, forming columns. ... The title of this post paraphrases E. J.
Coreys article in 1997 (DOI: 10.1016/S0040-4039(96)02248-4) which probed the origins of conformation restriction in aldehydes. The proposal was of (then) unusual hydrogen bonding between the O=C-H…F-B groups. Here I explore whether the NCI (non-covalent-interaction) method can be used to cast light on this famous example of how […] Modern Conformational Analysis. Elucidating Novel Exciting Molecular Structures. Edition No. 1. Methods in Stereochemical Analysis When I first wrote about small-molecule structures obtained by microED (electron diffraction), I wondered if there were some way to get absolute stereochemistry α-グルコシダーゼの基質特異性を解明するため,PNPα-D-gucopyranosie(1)の3-デオキシ体と6-デオキシ体(新規化合物)をMethyl α-D-glucopyranosideからそれぞれ7および4工程で合成した.1HNMRおよび分子力場計算から,これらも1と同様に重水中で4C1イス型構造を維持していることを確認した.これらを含むPNPα-D-glucopyranosideの4種のデオキシ体に対するrice由来α-glucosidaeの加水分解活性を測定した結果,3-,4-,および6-デオキシ体は加水分解されなかったが,2-デオキシ体はよく加水分解された.そこで,1および2-デオキシ基質について反応速度論的な解析を行った.. 3- and 6-Monodeoxy derivatives of ρ-nitrophenyl (PNP) α-D-glucopyranoside (1) were prepared from methyl α-D-glucopyranoside and confirmed to retain 4C1 chair conformations in D2O by 1H-NMR. Rice α-glucosidase did not hydrolyze the 3-, 4- and 6-deoxy derivatives of 1, but revealed high ... 
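The fixed-width TIGER/Line Record Type I layout that follows can be read with simple column slicing. A minimal sketch in plain Python, covering only the first few fields; it assumes the Beg/End offsets are 1-based and inclusive, which is how such column specifications are conventionally interpreted.

```python
# Minimal fixed-width parser for a few fields of the TIGER/Line
# Record Type I layout. Field tuples are (name, Beg, End) with
# 1-based inclusive column offsets, as given in the layout.
FIELDS = [
    ("RT", 1, 1),         # Record Type
    ("VERSION", 2, 5),    # Version Number
    ("FILE", 6, 10),      # File Code
    ("TLID", 11, 20),     # TIGER/Line ID
]

def parse_record(line):
    # Convert the 1-based inclusive ranges to Python slices.
    return {name: line[beg - 1:end].strip() for name, beg, end in FIELDS}

rec = parse_record("I2000123450000000042")
print(rec["RT"], rec["VERSION"], rec["FILE"], rec["TLID"])
# I 2000 12345 0000000042
```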
Record Type I - Link Between Complete Chains and Polygons

    Field    BV   Fmt  Type  Beg  End  Len  Description
    RT       No   L    A       1    1    1  Record Type
    VERSION  No   L    N       2    5    4  Version Number
    FILE     No   L    N       6   10    5  File Code
    TLID     No   R    N      11   20   10  TIGER/Line ID, Permanent 1-Cell Number
    TZIDS    No   R    N      21   30   10  TIGER ID, Start, Permanent Zero-Cell Number
    TZIDE    No   R    N      31   40   10  TIGER ID, End, Permanent Zero-Cell Number
    CENIDL   Yes  L    A      41   45    5  Census File Identification Code, Left
    POLYIDL  Yes  R    N      46   55   10  Polygon Identification Code, Left
    CENIDR   Yes  L    A      56   60    5  Census File Identification Code, Right
    POLYIDR  Yes  R    N      61   70   10  Polygon Identification Code, Right
    RS-I4    Yes  L    A      71   80   10  Reserved Space I-4
    FTSEG    Yes  L    A      81   97   17  FTSeg ID (AAAAA.O.XXXXXXXXX) (Authority-S-ID), FGDC Transportation ID Standard (not filled)
    RS-I1    Yes  L    A      98  107   10  Reserved Space I1
    RS-I2    Yes  L    A     108  117   10  Reserved Space I2
    RS-I3    Yes  L    A     118  127   10  Reserved Space ...

Blizzcon 2013 Recap. We have included a few points from each of the Warlords of Draenor panels, but there is a lot more information in each post. New Character Models: the old Dwarf male has 1,160 polygons and was mirrored on both sides. The new model is 7,821 polygons and has higher texture resolutions. The face doesn't have to be symmetrical anymore. They went from 956 to 5,408 polygons on the Gnomes, and from 130 to 196 bones. Texture resolution was 2 to 5x higher depending on the area of the body.

GAMESS implements a wide range of quantum chemical computations. For the SPEC workload, self-consistent field calculations are performed using the Restricted Hartree-Fock method, Restricted open-shell Hartree-Fock, and Multi-Configuration Self-Consistent ...

I really like drawing sulphur in Inkscape. :) I came up with an easy way to do it. Basically you use the star/polygon tool in polygon mode.
Set it to make 6- or 7-sided polygons, spoke ratio should be at about .8, rounding should be at 0, crank random up to .17. Then draw a…

Another example of the initial conformer affecting systematic results can be found by inspecting an 8-membered carbon chain: C1-C2-C3-C4-C5-C6-C7-C8. For illustrative purposes, consider the central C4-C5 bond as we rotate it 360 degrees, from -180 to 180 degrees. If one starts in the all-trans conformation, we find that as we rotate the C4-C5 bond, the final conformation at 360 degrees is identical to the initial one at 0 degrees. However, if the initial conformation had a number of kinks in it, we might discover that at the 120 degree mark, C1 and C8 ran into each other. To relieve this steric problem the other dihedral angles will relax, likely changing by more than 100 degrees and falling into new energy wells. As we continue the coordinate driving of the central C4-C5 angle to trans (180), we might find that the final conformation is not the same as the initial conformation because these other dihedrals have changed ...

Orientational order and vesicle shape.

The molecular packing of the title compound.
Dashed lines indicate the C-H···π intramolecular contacts [Symmetry codes: (i) -x, -y, -z; (ii) 1 + x, +y, +z].

The reaction between the HO radical and (H2O)n (n = 1, 3) clusters has been investigated employing high-level quantum mechanical calculations using DFT-BH&HLYP, QCISD, and CCSD(T) theoretical approaches in connection with the 6-311+G(2df,2p), aug-cc-pVTZ, and aug-cc-pVQZ basis sets. The rate constants have also been calculated, and the tunneling ...

1a. What is the least number of identical squares that can be placed in a plane such that each shares a side with exactly two others?

1b. What is the least number of identical squares that can be placed in a plane such that each shares a side with at least one other and they form a contiguous region with no rotation or reflection symmetry?

1c. What is the least number of identical squares that can be placed in a plane such that each shares a side with exactly two others and they form a contiguous region with no rotation or reflection symmetry?

2a-c. Same as 1a-c, but replace squares with equilateral triangles.

3a-c. Same as 1a-c, but replace squares with regular hexagons ...
https://www.ideals.illinois.edu/handle/2142/99247
Title: Generic behaviour of a measure preserving transformation
Author(s): Etedadialiabadi, Mahmood
Director of Research: Solecki, Slawomir
Doctoral Committee Chair(s): van den Dries, Lou
Doctoral Committee Member(s): Hieronymi, Philipp; Tserunyan, Anush
Department / Program: Mathematics
Discipline: Mathematics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Measure preserving transformation; Measurable functions

Abstract: We study two different problems: generic behavior of a measure preserving transformation and extending partial isometries of a compact metric space. In Chapter 1, we consider a result of Del Junco--Lemańczyk [\ref{DL_B}] which states that a generic measure preserving transformation satisfies certain orthogonality conditions, and a result of Solecki [\ref{S1_B}] which states that every continuous unitary representation of $L^0(X,\mathbb{T})$ is a direct sum of actions by multiplication on measure spaces $(X^{|\kappa|},\lambda_\kappa)$, where $\kappa$ is an increasing finite sequence of non-zero integers. The orthogonality conditions introduced by Del Junco--Lemańczyk motivate a condition, which we denote by the DL-condition, on continuous unitary representations of $L^0(X,\mathbb{T})$. We show that the probabilistic (in terms of category) statement of the DL-condition translates to deterministic orthogonality conditions on the measures $\lambda_\kappa$. We also show a certain notion of disjointness for generic functions in $L^0(\mathbb{T})$, and orthogonality conditions similar to the result of Del Junco--Lemańczyk for a generic unitary operator on a Hilbert space $H$. In Chapter 2, we show that for every $\epsilon>0$, every compact metric space $X$ can be extended to another compact metric space $Y$ such that every partial isometry of $X$ extends to an isometry of $Y$ with $\epsilon$-distortion.
Furthermore, we show that the problem of extending partial isometries of a compact metric space, $X$, to isometries of another compact metric space, $X\subseteq Y$, is equivalent to extending partial isometries of $X$ to certain functions in $\operatorname{Homeo}(Y)$ that look like isometries from the point of view of $X$.

Issue Date: 2017-12-08
Type: Text
URI: http://hdl.handle.net/2142/99247
Rights Information: Copyright 2017 Mahmood Etedadialiabadi
Date Available in IDEALS: 2018-03-13, 2020-03-14
Date Deposited: 2017-12
https://excelatfinance.com/online/?sfwd-topic=direct-vs-indirect-referencing
# Direct vs indirect referencing

## The INDIRECT function

The Excel INDIRECT function returns a reference specified by a text string. SYNTAX: INDIRECT(ref_text, [a1])

Figure 1 shows examples of the INDIRECT function (two stage) and a direct reference (one stage). The custom GetCF function is used to show the cell formula.

• Cell D5: =B2. This is a direct, one-stage reference to B2, and the formula returns the value 1.
• Cell D11: =INDIRECT(F7), where cell F7 contains the text string B8. Without the INDIRECT function, D11 would display the text string B8, but the INDIRECT function points to cell B8 and returns the value 7.

## When to use the INDIRECT function

The INDIRECT function is often used in conjunction with the concatenation operator (&) to construct references from labels. For example =VLOOKUP($A14,INDIRECT(B$13&".ax!"&B$13),8,FALSE) from the xlf article Vlookup w/ indirect.

## INDIRECT allows R1C1 reference style in A1 workbook

Using the INDIRECT function is one way of using R1C1 reference style in an A1 workbook, or A1 reference style in an R1C1 workbook.
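At its core, INDIRECT resolves a text string such as "B8" to an actual cell reference. The resolution step can be sketched in Python — illustrative only, not Excel's implementation, and `a1_to_rowcol` is a hypothetical helper name:

```python
import re

# Resolve an A1-style reference string to (row, column) indices.
# Column letters are a base-26 encoding with A=1, so "AA" follows "Z".
def a1_to_rowcol(ref):
    m = re.fullmatch(r"([A-Za-z]+)([0-9]+)", ref)
    if not m:
        raise ValueError(f"not an A1 reference: {ref!r}")
    letters, digits = m.groups()
    col = 0
    for ch in letters.upper():
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return int(digits), col

print(a1_to_rowcol("B8"))   # (8, 2)
print(a1_to_rowcol("AA10")) # (10, 27)
```

With a lookup table mapping (row, col) to values, the two-stage behavior of =INDIRECT(F7) is then: read the string stored at F7, resolve it as above, and return the value at the resolved coordinates.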
https://indico.cern.ch/event/433345/contributions/2358134/
# Quark Matter 2017

5-11 February 2017, Hyatt Regency Chicago, America/Chicago timezone

## '2+1' Correlations in Pb--Pb and pp collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with ALICE @ LHC

Not scheduled, 2h 30m

#### Hyatt Regency Chicago
151 East Wacker Drive, Chicago, Illinois, USA, 60601

Board: X01 (Poster)

### Speaker

Greeshma Koyithatta Meethaleveedu (IIT- Indian Institute of Technology (IN))

### Description

In the early stages of collisions, hard scattering of the quarks and gluons from the incoming nuclei results in the production of high-momentum partons which fragment into collimated sprays of hadrons called "jets". At lower transverse momenta, where the event-by-event reconstruction of jets becomes difficult, their event-averaged effect generates observable correlations, which have been studied using triggered two-particle angular correlation measurements. To control the di-jet production point, we require two back-to-back trigger particles with different momenta. Using symmetric and asymmetric trigger $p_{T}$ combinations, we attempt to control the path lengths traversed by the triggers. These antipodal triggers allow a simultaneous comparison of the near and away sides, which is otherwise difficult due to the background subtraction involved on the away side, and so let us compare the impact of different kinematic cuts on the fragmentation bias. In this analysis the relative pseudorapidity and azimuthal angle distributions ($\Delta\eta$ - $\Delta\phi$) of particles with respect to both triggers are constructed, and the yield is extracted from a fit to the $\Delta\eta$ projection. The measurement is done in central and semi-central events for three $p_{T}$ combinations of primary and secondary triggers. Heavy-ion measurements have been compared with pp reference data, which forms a rigorous baseline for correlation measurements.
The variation observed between the near and away sides will be presented, which will shed light on the modification of the $p_{T}$ of jet fragments. To further interpret the results in terms of path-length dependence, the comparison of these results to JEWEL model simulations will be presented as well.

Preferred Track: Correlations and Fluctuations
ALICE

### Primary author

Greeshma Koyithatta Meethaleveedu (IIT- Indian Institute of Technology (IN))
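For readers unfamiliar with the $\Delta\eta$ - $\Delta\phi$ construction: the azimuthal difference is usually folded into a fixed window so that the near side (around 0) and the away side (around $\pi$) both appear in one histogram. A generic sketch, not ALICE analysis code; the window $[-\pi/2, 3\pi/2)$ used here is a common convention, assumed for illustration:

```python
import math

# Fold the azimuthal difference between a trigger and an associated
# particle into [-pi/2, 3*pi/2), so the near-side peak sits at 0 and
# the away-side peak at pi.
def delta_phi(phi_trig, phi_assoc):
    dphi = phi_trig - phi_assoc
    while dphi < -math.pi / 2:
        dphi += 2 * math.pi
    while dphi >= 3 * math.pi / 2:
        dphi -= 2 * math.pi
    return dphi

print(delta_phi(0.1, 6.2))  # near side: ~0.183, just past a 2*pi wrap
```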
https://crypto.stackexchange.com/questions/30442/what-is-the-difference-between-securely-realizes-and-securely-implements/30450#30450
# What is the difference between "securely realizes" and "securely implements"?

In some security proofs it is stated that "a protocol securely realizes an ideal functionality", while in others "a protocol securely implements an ideal functionality".

1. Is there a meaningful difference, or is it just another verb for saying a similar thing?
2. Also, is there any difference between emulation and simulation in the context of MPC?

Update: Example for question 1: Outsourced pattern matching, S. Faust et al., see Definition 3, and compare with 5PM: secure pattern matching, J. Baron et al., page 29, Definition 5. Example for question 2: the Wikipedia article on Universal composability: "Literally, the protocol may simulate the other protocol (without having access to the code). The notion of security is derived by implication. Assume a protocol P_1 is secure per definition. If another protocol P_2 emulates protocol P_1 such that no environment tells apart the emulation from the execution of the protocol, then the emulated protocol P_2 is as secure as protocol P_1."

• No difference. Just terminology. Nov 9 '15 at 14:14

Realizes vs Implements

Given the context of the cited papers, they mean the same thing. That said, I would prefer realizes. Implements has a connotation of a source code implementation. There could be implementation flaws (buffer overflow, etc.) that impact security. The protocol design is secure, but the implementation is not. That, to me, is the primary reason to prefer realizes over implements.

Simulate vs emulate

The difference in the wording on the Wiki stems from the difference between the two. I found this description of the general difference between the two to be good:

Emulation is the process of mimicking the outwardly observable behavior to match an existing target. The internal state of the emulation mechanism does not have to accurately reflect the internal state of the target which it is emulating.
Simulation, on the other hand, involves modeling the underlying state of the target. The end result of a good simulation is that the simulation model will emulate the target which it is simulating.

So, the wiki article says that "the protocol may simulate the other protocol", but it doesn't have to simulate it (in terms of modeling the underlying state of the target). All it really has to do is emulate, or mimic, the outwardly observable behavior. So, internally, we may be simulating the protocol, or we may be doing something else, as long as the outwardly observable behavior matches.

Note that in Universal Composability there is a simulator $$S$$; it is not called a simulator because it simulates the real functionality, it is called a simulator because: "We often call the adversary $$S$$ a simulator. This is due to the fact that in typical proofs of security the constructed $$S$$ operates by simulating an execution of $$\mathcal{A}$$."

• Actually I want to know Simulate vs Emulate in the context of MPC. Professor Lindell said there is no difference in a comment. Nov 9 '15 at 14:22
• The quoted part is exactly what it is in the context of MPC. It is two different points of view of (mostly) the same thing. Internal vs external. – tylo Nov 9 '15 at 14:41
• You can also say "X is realized by Y" if that is less awkward to you. Ultimately it just means "to make real, concrete". Nov 9 '15 at 18:52
https://gitlab.math.tu-dresden.de/backofen/amdis/-/blame/3484b9be9280aee3c4574d80b80d80b0a23b09e5/AMDiS/src/AdaptInstationary.h
AdaptInstationary.h

    // ============================================================================
    // ==                                                                        ==
    // ==  AMDiS - Adaptive multidimensional simulations                         ==
    // ==                                                                        ==
    // ==  http://www.amdis-fem.org                                              ==
    // ==                                                                        ==
    // ============================================================================
    //
    // Software License for AMDiS
    //
    // Copyright (c) 2010 Dresden University of Technology
    // All rights reserved.
    // Authors: Simon Vey, Thomas Witkowski et al.
    //
    // This file is part of AMDiS
    //
    // See also license.opensource.txt in the distribution.

    /** \file AdaptInstationary.h */

    #ifndef AMDIS_ADAPTINSTATIONARY_H
    #define AMDIS_ADAPTINSTATIONARY_H

    #include
    #include
    #include
    #include
    #include "Flag.h"
    #include "AdaptInfo.h"
    #include "AdaptBase.h"
    #include "AMDiS_fwd.h"

    namespace AMDiS {

      using namespace std;

      /** \ingroup Adaption
       * \brief
       * AdaptInstationary implements the adaptive procedure for time dependent
       * problems (see ProblemInstat). It contains a pointer to a ProblemInstat
       * object.
       */
      class AdaptInstationary : public AdaptBase
      {
      public:
        /// Creates an AdaptInstationary object with the given name for the time
        /// dependent problem problemInstat. TODO: Make obsolete!
        AdaptInstationary(string name,
                          ProblemIterationInterface *problemStat,
                          AdaptInfo *info,
                          ProblemTimeInterface *problemInstat,
                          AdaptInfo *initialInfo,
                          time_t initialTimestampSet = 0);

        /// Creates an AdaptInstationary object with the given name for the time
        /// dependent problem problemInstat.
        AdaptInstationary(string name,
                          ProblemIterationInterface &problemStat,
                          AdaptInfo &info,
                          ProblemTimeInterface &problemInstat,
                          AdaptInfo &initialInfo,
                          time_t initialTimestampSet = 0);

        /** \brief
         * This function is used only to avoid double code in both constructors. If the
         * obsolete constructor, which uses pointers instead of references, will be
         * removed, remove also this function.
         * TODO: Remove if obsolete constructor will be removed.
         */
        void initConstructor(ProblemIterationInterface *problemStat,
                             AdaptInfo *info,
                             AdaptInfo *initialInfo,
                             time_t initialTimestampSet);

        /// Destructor
        virtual ~AdaptInstationary() {}

        /// Sets \ref strategy to aStrategy
        inline void setStrategy(int aStrategy)
        {
          strategy = aStrategy;
        }

        /// Returns \ref strategy
        const int getStrategy() const
        {
          return strategy;
        }

        /// Implementation of AdaptBase::adapt()
        virtual int adapt();

        /// Serialization
        virtual void serialize(ostream &out);

        /// Deserialization
        virtual void deserialize(istream &in);

      protected:
        /** \brief
         * Implements one (maybe adaptive) timestep. Both the explicit and the
         * implicit time strategy are implemented. The semi-implicit strategy
         * is only a special case of the implicit strategy with a limited number of
         * iterations (exactly one).
         * The routine uses the parameter \ref strategy to select the strategy:
         * strategy 0: explicit strategy,
         * strategy 1: implicit strategy.
         */
        virtual void oneTimestep();

        /// Initialisation of this AdaptInstationary object
        void initialize(string aName);

        /// Implements the explicit time strategy. Used by \ref oneTimestep().
        virtual void explicitTimeStrategy();

        /// Implements the implicit time strategy. Used by \ref oneTimestep().
        virtual void implicitTimeStrategy();

        /** \brief
         * This iteration strategy allows the timestep and the mesh to be adapted
         * after each timestep solution. There are no inner loops for mesh adaption and
         * no refused timesteps.
         */
        void simpleAdaptiveTimeStrategy();

        /** \brief
         * Checks whether the runtime of the queue (of the server's batch system) requires
         * to stop the calculation and to reschedule the problem to the batch system.
         *
         * The function returns true if there will be a timeout in the near future, and
         * therefore the problem should be rescheduled. Otherwise, the return value is
         * false.
         */
        bool checkQueueRuntime();

      protected:
        /// Strategy for choosing one timestep
        int strategy;

        /// Parameter \f$ \delta_1 \f$ used in time step reduction
        double timeDelta1;

        /// Parameter \f$ \delta_2 \f$ used in time step enlargement
        double timeDelta2;

        /// If this parameter is 1 and the instationary problem is stable, hence the number
        /// of solver iterations to solve the problem is zero, the adaption loop will stop.
        int breakWhenStable;

        ///
        bool fixedTimestep;

        /// Runtime of the queue (of the server's batch system) in seconds. If the problem
        /// runs on a computer/server without a time limited queue, the value is -1.
        int queueRuntime;

        /// Name of the file used to automatically serialize the problem.
        string queueSerializationFilename;

        /// Timestamp at the beginning of all calculations. It is used to calculate the
        /// overall runtime of the problem.
        time_t initialTimestamp;

        /// Timestamp at the beginning of the last timestep iteration. It is used to
        /// calculate the runtime of the last timestep.
        time_t iterationTimestamp;

        /// Stores the runtime (in seconds) of some last timestep iterations.
        queue lastIterationsDuration;

        /// In debug mode, the adapt loop will print information about timestep decreasing
        /// and increasing.
        bool dbgMode;
      };

    }

    #endif // AMDIS_ADAPTINSTATIONARY_H
https://crypto.ethz.ch/publications/HiLiMa21.html
# Information Security and Cryptography Research Group

## Adaptive Security of Multi-Party Protocols, Revisited

### Martin Hirt, Chen-Da Liu Zhang, and Ueli Maurer

Theory of Cryptography — TCC 2021, LNCS, Springer International Publishing, vol. 13042, pp. 686–716, Nov 2021.

The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are completely lost when the number of corrupted parties exceeds a certain corruption bound. Early and also many recent protocols are only statically secure in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called adaptive security, and a significant body of literature is devoted to achieving adaptive security, which is known as a difficult problem. In particular, a main technical obstacle in this context is the so-called "commitment problem", where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to the use of cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss. This paper provides a new, clean-slate treatment of adaptive security in MPC, exploiting the specification concept of constructive cryptography (CC). A new natural security notion, called CC-adaptive security, is proposed, which is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating between adaptive and static security are also insecure in our notion.
Moreover, our notion avoids the commitment problem and thereby the need to use non-committing or equivocal tools. We exemplify this by showing that the protocols by Cramer, Damgård and Nielsen (EUROCRYPT '01) for the honest majority setting, and (the variant without non-committing encryption) by Canetti, Lindell, Ostrovsky and Sahai (STOC '02) for the dishonest majority setting, achieve CC-adaptive security. The latter example is of special interest since all UC-adaptive protocols in the dishonest majority setting require some form of non-committing or equivocal encryption.

## BibTeX Citation

    @inproceedings{HiLiMa21,
      author    = {Martin Hirt and {Chen-Da} {Liu Zhang} and Ueli Maurer},
      title     = {Adaptive Security of Multi-Party Protocols, Revisited},
      editor    = {Nissim, Kobbi and Waters, Brent},
      booktitle = {Theory of Cryptography --- TCC 2021},
      pages     = {686--716},
      series    = {LNCS},
      volume    = {13042},
      year      = {2021},
      month     = {11},
2022-05-29 12:54:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5344266295433044, "perplexity": 4196.4032307842135}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00730.warc.gz"}
https://formulasearchengine.com/wiki/Quarter_period
# Quarter period

In mathematics, the quarter periods K(m) and iK ′(m) are special functions that appear in the theory of elliptic functions.

The quarter periods K and iK ′ are given by

${\displaystyle K(m)=\int _{0}^{\frac {\pi }{2}}{\frac {d\theta }{\sqrt {1-m\sin ^{2}\theta }}}}$

and

${\displaystyle {\rm {i}}K'(m)={\rm {i}}K(1-m).\,}$

When m is a real number, 0 ≤ m ≤ 1, both K and K ′ are real numbers. By convention, K is called the real quarter period and iK ′ is called the imaginary quarter period. Any one of the numbers m, K, K ′, or K ′/K uniquely determines the others.

These functions appear in the theory of Jacobian elliptic functions; they are called quarter periods because the elliptic functions ${\displaystyle {\rm {sn}}u\,}$ and ${\displaystyle {\rm {cn}}u\,}$ are periodic functions with periods ${\displaystyle 4K\,}$ and ${\displaystyle 4{\rm {i}}K'\,}$.

The quarter periods are essentially the elliptic integral of the first kind, obtained by the substitution ${\displaystyle k^{2}=m\,}$. In this case, one writes ${\displaystyle K(k)\,}$ instead of ${\displaystyle K(m)\,}$, it being understood from the notation whether ${\displaystyle k\,}$ or ${\displaystyle m\,}$ is meant. Writing ${\displaystyle m=\sin ^{2}\alpha \,}$ for the modular angle ${\displaystyle \alpha \,}$, the complementary parameter is

${\displaystyle m_{1}=\sin ^{2}\left({\frac {\pi }{2}}-\alpha \right)=\cos ^{2}\alpha .\,\!}$

The elliptic modulus can be expressed in terms of the quarter periods as

${\displaystyle k={\textrm {ns}}(K+{\rm {i}}K')\,\!}$

and

${\displaystyle k'={\textrm {dn}}K\,}$

where ns and dn are Jacobian elliptic functions.
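The defining integral for the real quarter period can be cross-checked numerically against the classical arithmetic-geometric mean evaluation K(m) = π / (2·agm(1, √(1−m))). That AGM identity is standard but is not stated in this article, so the sketch below (plain Python, no external libraries) should be read as an illustration rather than part of the text:

```python
import math

def K_agm(m):
    # Real quarter period via the arithmetic-geometric mean:
    # K(m) = pi / (2 * agm(1, sqrt(1 - m))), valid for 0 <= m < 1.
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def K_integral(m, n=100000):
    # Midpoint-rule evaluation of the defining integral above.
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += h / math.sqrt(1.0 - m * math.sin(theta) ** 2)
    return total

print(K_agm(0.5))       # ~1.854075
print(K_integral(0.5))  # agrees with the AGM value
```

For m = 0 the AGM loop exits immediately and returns exactly π/2, matching the integral K(0).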
The elliptic nome q is given by

${\displaystyle q=e^{-{\frac {\pi K'}{K}}}.\,}$

The complementary nome is given by

${\displaystyle q_{1}=e^{-{\frac {\pi K}{K'}}}.\,}$

The real quarter period can be expressed as a Lambert series involving the nome:

${\displaystyle K={\frac {\pi }{2}}+2\pi \sum _{n=1}^{\infty }{\frac {q^{n}}{1+q^{2n}}}.\,}$

Additional expansions and relations can be found on the page for elliptic integrals.

## References

• Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions, (1964) Dover Publications, New York. ISBN 0-486-61272-4. See chapters 16 and 17.
2020-04-02 06:06:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 26, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7748159170150757, "perplexity": 443.58097878242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00279.warc.gz"}
http://mathhelpforum.com/algebra/42003-contructing-polynomial-functions.html
# Math Help - Constructing Polynomial Functions

1. ## Constructing Polynomial Functions

I have to make a polynomial function that uses these: Third-degree, with zeros of -3, -1, and 2, and passes through the point (4, 7)

2. Hello!

Originally Posted by mankvill
I have to make a polynomial function that uses these: Third-degree, with zeros of -3, -1, and 2, and passes through the point (4, 7)

If -3, -1 and 2 are zeros of the polynomial P(x), then x-(-3)=x+3, x+1 and x-2 divide P(x). But (x+3)(x+1)(x-2) is already a polynomial of degree 3. So P(x) is just proportional to (x+3)(x+1)(x-2).

$P(x)=a(x+3)(x+1)(x-2)$

But we know that $P(4)=7$. Substituting, you can get a.

3. ...I'm confused. What is the answer, then?

4. Originally Posted by mankvill
...I'm confused. What is the answer, then?

Because (4,7) belongs to the polynomial function, $7=P(4)=a(4+3)(4+1)(4-2)=a*7*5*2=70a$ ----> $a=\frac 1{10}$

$P(x)=\frac 1{10}(x+3)(x+1)(x-2)$

Develop to get the complete polynomial

5. $x^3/10 + x^2/5 - x/2 - 3/5$ Is this correct?

edit: i don't know how to make a numerator and denominator, obviously

6. Originally Posted by mankvill
$x^3/10 + x^2/5 - x/2 - 3/5$ Is this correct?

Yes. And when checking, we get P(-3)=P(-1)=P(2)=0 and P(4)=7, which is exactly what we wanted.

Originally Posted by mankvill
edit: i don't know how to make a numerator and denominator, obviously

\frac{numerator}{denominator}
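The accepted answer can be sanity-checked numerically; this quick sketch is not part of the original thread:

```python
# Check the thread's answer P(x) = (1/10)(x+3)(x+1)(x-2),
# whose expanded form is x^3/10 + x^2/5 - x/2 - 3/5.
def P(x):
    return (x + 3) * (x + 1) * (x - 2) / 10

def P_expanded(x):
    return x**3 / 10 + x**2 / 5 - x / 2 - 3 / 5

for zero in (-3, -1, 2):
    assert abs(P(zero)) < 1e-12       # the required zeros
assert abs(P(4) - 7) < 1e-12          # passes through (4, 7)

# factored and expanded forms agree on a range of sample points
assert all(abs(P(x) - P_expanded(x)) < 1e-9 for x in range(-10, 11))
print("all checks passed")
```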
2014-03-11 17:34:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8554731011390686, "perplexity": 1593.4879531666188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011237821/warc/CC-MAIN-20140305092037-00082-ip-10-183-142-35.ec2.internal.warc.gz"}
http://crypto.stackexchange.com/questions?page=123&sort=active
# All Questions

### How large should a Diffie-Hellman p be? (2k views)
In a Diffie-Hellman exchange, the parties need to agree on a prime p and a base g in order to continue. Assuming some ...

### Offline anonymous electronic money systems and their cryptographical base (447 views)
What anonymous offline electronic money systems exist and what are they based on? I know only one currently - eCash, based on RSA blind signatures.

### How to construct a zero-knowledge proof of a number of the form $n=p^a q^b$ (267 views)
Let $n = p^a q^b$ where p and q are distinct primes and a and b are positive integers. How to construct a zero knowledge proof that n is of such form? This is actually a homework problem with a ...

### Why was ISO10126 Padding Withdrawn? (1k views)
Wikipedia mentions ISO10126 Padding has been withdrawn, but doesn't say why. Also there were no news reports about this, as far as I can see. Why was it withdrawn? Are there security flaws? Is there ...

### Sending KCV (key check value) with cipher text (2k views)
I was wondering why it is not more common to send the KCV of a secret key together with the cipher text. I see many systems that send cipher text and properly prepend the IV to e.g. a CBC mode ...

### secure multiparty computation for multiplication (375 views)
Suppose there are $N$ parties $p_j$, each with a binary $b_j\in{\{0,1\}}$. The problem needs to compute the multiplication of number of ones times that of zeros, that is, ...

### How can I store a combination of multiple pass phrases? (192 views)
Let's assume we have 2 phrases, one is the real password from a user, and the other is generated from the real password and almost impossible to guess. You would need both to authenticate a user. What ...

### Can I secure my key by XORing it with a hashed password? (502 views)
I'd like to build a simple password-protected symmetric key system. The key-creation process in my system operates as follows: The system creates a 256-bit key purely at random. The user chooses a ...

### Are derived hashes weakening the root? (297 views)
Given a root hash root = H(plaintext) and two (or more) derived hashes h1 = H(salt1 + root), h2 = H(salt2 + root), would the ...

### How to construct encrypted functions (with either public or private data)? (858 views)
Homomorphic encryption is often touted for its ability to Compute on encrypted data with public functions Compute an encrypted function on public (or private) data I feel I have a good grasp of #1 ...

### Is the AES Key Schedule weak? (1k views)
After reading this paper entitled Key Recovery Attacks of Practical Complexity on AES Variants With Up To 10 Rounds I was left wondering why the key schedule of AES is invertable. In the paper the ...

### A simple block cipher based on the SHA-256 hash function [duplicate] (1k views)
I've come up with this little routine for doing encryption using the SHA-2 (in this case SHA-256) hash function. As such it is a block cipher with a 256 bit (32 byte) block size and an arbitrary key ...
2015-04-18 19:32:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5733090043067932, "perplexity": 2287.9308335479495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636104.0/warc/CC-MAIN-20150417045716-00197-ip-10-235-10-82.ec2.internal.warc.gz"}
https://2022.help.altair.com/2022/activate/business/en_us/topics/reference/oml_language/FileIO/xlsfinfo.htm
# xlsfinfo

Provides information about Microsoft Excel compatibility of the file, file, using the omlxlstoolbox.

## Syntax

type = xlsfinfo(file)

[type, sheets] = xlsfinfo(file)

[type, sheets, format] = xlsfinfo(file)

[type, sheets, format, namedranges] = xlsfinfo(file)

## Inputs

file
Name of the file to check for Microsoft Excel compatibility.
Type: string

## Outputs

type
If the file is readable in Microsoft Excel, type will be 'Microsoft Excel Spreadsheet'. If not readable, an empty string will be returned. If no output or no other outputs are specified, information about an Excel compatible file, file, will be displayed in the OML command window.
Type: string

sheets (optional)
Returns an n×2 cell array, where n is the number of sheets in file. The first column of sheets contains the name of each sheet. The second column of sheets contains the used data range, from left to right. If this output is not specified, this information is printed in the OML command window.
Type: cell

format (optional)
Output which returns the type of the Excel compatible file. If it is not compatible, an empty string will be returned.
Type: string

namedRanges (optional)
Returns information about named ranges, if any. namedRanges will be a cell array with three columns. The first column contains the name of the range. The second column contains the parent sheet name where the named range is applicable. If applicable to the entire workbook, this field will contain the filename, file. The third column contains the range of this named range entry.
Type: cell

## Examples

Get information about an Excel file, with sheet names printed to the OML command window:

    xlsfinfo('test.xls')
    1: Sheet1 (Used range ~ B1:D3)
    2: Sheet2 (Used range ~ A1:C4)
    ans = Microsoft Excel Spreadsheet

Get information about an Excel file, with the file format:

    [filetype, sheets, format] = xlsfinfo('large2.xlsx')
    filetype = Microsoft Excel Spreadsheet
    sheets =
    {
      [1,1] Sheet1 name
      [1,2] A1:A1
      [2,1] Sheet2 new name
      [2,2] A1:B3
      [3,1] 3rd Sheet name
      [3,2] A1:D32
      [4,1] Info
      [4,2] A1:D4
      [5,1] sheet5_name
      [5,2] A1:C1
    }
    format = xlOpenXMLWorkbook

Get information about a file that is not readable in Microsoft Excel:

    xlsfinfo('Untitled1.oml')
    1:Untitled1.oml (UsedRange ~ A1:A2)
    ans =

Get information about an Excel file, including named ranges:

    [type, sheets, fmt, ranges] = xlsfinfo('large1.xlsx')
    type = Microsoft Excel Spreadsheet
    sheets =
    {
      [1,1] ABC123
      [1,2] B2:AZ8448
    }
    fmt = xlOpenXMLWorkbook
    ranges =
    {
      [1,1] namedrange1
      [1,2] ABC123
      [1,3] E10
      [2,1] test
      [2,2] large1.xlsx
      [2,3] B7:E14
    }
2022-07-06 10:24:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2286820262670517, "perplexity": 9771.884498693635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00237.warc.gz"}
http://codeforces.com/blog/entry/69164
### hmehta's blog

By hmehta, history, 10 months ago

Hey All! TCO19 Algorithm WildCard and Parallel Rated Round are scheduled to start at 12:00 UTC-4, August 17, 2019. Registration is now open for both matches in the Web Arena or Applet and will close 5 minutes before the match begins. Good luck to everyone!

Match Results (To access match results, rating changes, challenges, failure test cases)
Problem Archive (To access previous problems with their categories and success rates)
Problem Writing (To access everything related to problem writing at Topcoder)
Algorithm Rankings (To access Algorithm Rankings)
Editorials (To access recent Editorials)

• +38

» 10 months ago, # | +11
Gentle Reminder: Round begins in 2 hours!

» 10 months ago, # | 0
I want to get rating += 2.

» 10 months ago, # | ← Rev. 2 → 0
Adding steps-1 rooks instead of steps rooks passed the examples in the last problem. D: Also, it seems that neal [almost] managed to pass an $O(N^3)$ solution for the second problem. D:D:

• » » 10 months ago, # ^ | 0
neal's solution did not fail because of time limit though. Not sure why it failed.

• » » » 10 months ago, # ^ | +1
It failed because of overflow. 3 * Q[i] is a little too big for int :( I changed int to unsigned in the practice room and it passes.

» 10 months ago, # | +21
Failed on the hard another time because of no time to delete the debugging output :(

» 10 months ago, # | ← Rev. 2 → +2
Good problemset overall. I do like the 600-pts problem. The only sad thing is my first-ever last place in an SRM, with -75.00 pts :) However, for the 600-pts problem, I thought that there are solutions which are far faster (about 0.1 seconds) than the 7-second time limit. We don't need map or unordered_map, because the sum of $W$ and $X$ is less than $5.5 \times 10^8$ for any case. Since the sequence is kind of random, I expect $c_k \leq 7$ for all $k$, where $c_k$ is the number of ordered pairs $(i, j)$ with $Q_i + Q_j = k$.
So, we can use a data structure which is similar to a bitset. It will also fit in the memory limit of 256MB, because we only need $\frac{5.5 \times 10^8}{21}$ 64-bit integers. Anyway, my solution failed in the challenge phase, so I'm not sure about $c_k \leq 7$ (but almost sure).

• » » 10 months ago, # ^ | +8
They might have consciously increased the time limit to allow map solutions. The problem already had a few ideas to think of: the first element will always be even, the only possibility being $Q_0$. Then one needed to reduce the $N^3\log(N)$ to $N^2\log(N)$. In addition, the implementation with $map$ also involved care, to avoid memory constraints. Adding another idea of thinking about the range of solutions and an additional data structure would have been a little much.

• » » » 10 months ago, # ^ | +1
Of course I know about it — I just introduced the idea of a solution which I thought is faster :)

• » » 10 months ago, # ^ | +8
Yes, there are much faster solutions, even with the same complexity and general idea as the one with map. For example, you can just sort all $Q_i-Q_j-Q_0$ which are in the range $[2Q_0, 2Q_j]$, remove duplicates and use lower_bound on this sorted vector instead of map [] or find. See my solution in practice, it runs in less than a second and less than 64 MB.

I agree that 600 is a nice problem, but I'm pissed off that the memory limit wasn't increased along with the time limit. A map with $2500^2$ elements already has trouble fitting into it (or exceeds it, details). If the basic idea of the problem is allowing whatever with the right idea to pass, the ML should be extra large too. If it's not, then there's no point in increasing the TL, just let people totally fail on example 1, realise map is a stupid idea and try something not stupid.
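The 3-bit packed-counter layout sketched in the thread above (counts saturating at 7, 21 fields per 64-bit word, hence only ~5.5e8/21 words for the whole key range) can be illustrated as follows. This is a hedged sketch of the idea, not any commenter's actual code, and the class and method names are invented:

```python
class PackedCounters:
    """Saturating 3-bit counters, 21 per 64-bit word (21 * 3 = 63 bits used)."""
    FIELDS_PER_WORD = 21

    def __init__(self, size):
        nwords = (size + self.FIELDS_PER_WORD - 1) // self.FIELDS_PER_WORD
        self.words = [0] * nwords

    def _locate(self, key):
        word, field = divmod(key, self.FIELDS_PER_WORD)
        return word, 3 * field

    def increment(self, key):
        word, shift = self._locate(key)
        # saturate at 7 so a field never overflows into its neighbor
        if (self.words[word] >> shift) & 0b111 != 0b111:
            self.words[word] += 1 << shift

    def get(self, key):
        word, shift = self._locate(key)
        return (self.words[word] >> shift) & 0b111
```

A quick use: `c = PackedCounters(100); c.increment(42); c.get(42)` returns 1, and repeated increments of the same key stop at 7 without disturbing adjacent fields.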
2020-06-03 07:30:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40504205226898193, "perplexity": 1521.1261124237494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00376.warc.gz"}
http://arxiv.wiki/abs/2003.07419
# Polynomial scaling of QAOA for ground-state preparation of the fully-connected p-spin ferromagnet

Matteo M. Wauters, Glen Bigan Mbeng, Giuseppe E. Santoro

## Main point

Studies the r-spin Ising magnet $$(\sum_{i=1}^N \sigma_i^z)^r$$ with an external $$\sigma_x$$ magnetic field. Although there are barriers for quantum annealing (very rough energy landscape), QAOA at depth p can do well when p is above a critical value $$\approx N/2$$ (because all the minima are degenerate). When p is below the threshold, you have to choose the parameters well to easily find the ground state.

## Results

• p=1 QAOA can solve the r-spin Ising magnet for every r, with h=0 and N odd (p=2 if N even)
• This was known for the r=2 Ising magnet (i.e. this is maxcut on a complete graph)
• for odd r, $$\beta=\gamma=\pi/4$$ is the answer; for even r, $$\gamma$$ depends on $$\beta$$

Then QAOA (including the variational parameter search) is simulated to estimate the time needed to converge.

tags: QAOA
2022-06-25 16:43:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8860393166542053, "perplexity": 4548.225993486238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00057.warc.gz"}
https://www.gamedev.net/topic/688021-how-to-deal-with-getting-stuck-in-sequential-impulse/
# How to deal with getting stuck in sequential impulse?

17 replies to this topic

### #1 Finalspace  Members

Posted 19 April 2017 - 02:24 AM

I have a case where my fixed-rotation player body gets stuck in my level when it teleports into static geometry or near an invalid position, occupying more than half the radius of the player. At first i thought this was an issue with my custom sequential impulse physics system, but this happens in box2d-lite as well.

It's hard to explain, so i attached a modified box2d-lite source to show this case.

What i want to know is how do i detect such a case, so that i can handle it properly - like trying to teleport the player to a better position, or changing the contact normals to random angles.

Would it work to check if there is a contact with an extremely high impulse going on?

#### Attached Files

Edited by Finalspace, 19 April 2017 - 02:29 AM.

### #2 Aressera  Members

Posted 19 April 2017 - 10:25 AM

I think in that case you need to look at the direction of the geometry surface normals to determine which side of the geometry is the front; then you can just push the object out in that direction with split impulse position correction. It's an issue of choosing the correct contact point and normal.

### #3 Randy Gaul  Members

Posted 19 April 2017 - 11:12 AM

Post a video of what is going on so we can easily take a look at the behavior. Also please draw contact points and contact normals.

### #4 Finalspace  Members

Posted 19 April 2017 - 12:09 PM

Post video of what is going on so we can easily take a look at the behavior. Also please draw contact points and contact normals.
Sure, here it is (single stepping and resume):

Also i modified the source to display the arrows as well ;-)

*Edit: Wait a second, this looks exactly like an internal edge issue. This may be solved by using my tile tracing system - converting my tilemap into connected line segments.

#### Attached Files

Edited by Finalspace, 19 April 2017 - 12:33 PM.

### #5 felipefsdev  Members

Posted 19 April 2017 - 01:32 PM

@FinalSpace It happens with Box2D and Chipmunk because you are setting the position variable directly:

    b->position.Set(6.0f, -tileSize * 0.5f); /* Code that I see in your video */

which you shouldn't do (in Box2D or Chipmunk). If, instead, you just set the velocity in Box2D or Chipmunk, it's unlikely that it will happen. It might get stuck in one of the corners, but not... stuck stuck... you can still move. It's unlikely to happen when setting velocity, because when you set the velocity, the engine (Box2D and Chipmunk) will record variables regarding the motion to solve later penetration in solids.

Since you are making your own physics engine, I imagine here is where it differs from Box2D and Chipmunk:

* Chipmunk and Box2D use an **iterative solver**, which makes it (theoretically) impossible for objects to get stuck forever. The iterative solver will move the penetrating object by a specified amount of units. That's why these two physics engines aren't nice for 2D games trying to simulate the old school platformers or top down. The iterative solver isn't perfect, because it **WILL** let objects penetrate (undesired), and then it will solve the collision little by little, making collisions with walls look like collisions with cushions (and it is a PAIN IN THE ASS to make it look good). So, I don't think you can compare your physics with Box2D/Chipmunk (and I don't recommend you to use them, because by the looks of your game, you don't want that cushion-like effect).
People using Game Maker often use the following approach, and it's very suitable for non-iterative solvers (which seems to be your case):

    h_speed = /* My horizontal speed here */;
    if (overlapping(x + h_speed, y, "wall")) {
        /* We will collide with this horizontal move */
        while (!overlapping(x + sign(h_speed), y, "wall")) {
            /* We move 1 by 1 px until we reach the limit without collision */
            x = x + sign(h_speed);
        }
        h_speed = 0;
    }
    x = x + h_speed; /* The original speed or 0 if it was solved above */

### #6 Randy Gaul  Members

Posted 19 April 2017 - 06:03 PM

I think you nailed it by mentioning the internal edge issue. The internal edges are giving a lot of solutions that probably are not wanted. If we look at 38s we can see the top right corner of the box is getting a couple of downward arrows. These seem to prevent the box from popping back up to the surface. Internal edges have lots of solutions. Maybe your raycasting thing can work! If you like we can start talking more about handling internal edges if that is helpful to you.

@felipe There seems to be some confusion. The video results are not due to using an iterative vs non-iterative solver. The problems in the video are entirely due to discrete collision detection vs continuous collision detection. In other words, the shape starts penetrating deeper into the geometry and comes across unwanted collisions. The unwanted collisions can cause things to get stuck or fall out of the world. The faster a shape moves, the more likely it is to fall into undesired collisions due to the nature of discrete timesteps.

### #7 felipefsdev  Members

Posted 19 April 2017 - 08:48 PM

Oh, I missed the edit that he solved the problem! By his post, I totally thought he was using his own physics engine and then tested with Box2D later; that's why I explained about the non-iterative solver and iterative solver.
### #8 Finalspace  Members

Posted 19 April 2017 - 11:05 PM

Just to clarify, this happened in my custom sequential impulse physics engine (which is identical to box2d-lite in terms of functionality) when i approached the static geometry too fast as well (hard to reproduce, but it happened a few times while testing). Half of the radius gets inside the static geometry and then the internal edge contacts push it into the geometry, so that it won't ever come out. Impulses at that point just go crazy and build up energy until a given point (warmstarting and multiple iterations keep them from exploding).

One solution i will try is to convert my tilemap (I love the concept of tilemaps, it's simple and can be easily changed) into connected line segments, which i can produce using a contour tracing algorithm that i have already implemented successfully in the past ;-) This should solve my internal edge problems for the most part. Also i should just clamp the velocity of the bodies so that they can never move too fast, so that bodies won't pass through thin line segments...

But i would love to hear what other solutions exist. I heard about conservative advancement, but after looking at the paper i still don't get it - except for the fact that it's a "search in time" algorithm, so it's basically a while loop until some threshold is met. Not sure how i would apply that method to the existing contact solving.

Anyway i will now port my javascript source to C++ and see how it goes.

Edited by Finalspace, 20 April 2017 - 12:14 AM.
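For reference, the "search in time" idea mentioned above reads roughly like this in code: query the separation, advance time by separation/speed (which can never tunnel past the obstacle), and stop when close enough. This is a hedged sketch for a non-rotating circle against a static segment; all names and tolerances are invented and it is not an implementation from this thread:

```python
import math

def segment_distance(p, a, b):
    # Distance from point p to segment ab.
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def toi_circle_vs_segment(center, radius, velocity, a, b, dt, slop=1e-3):
    # Conservative-advancement style time-of-impact search.
    speed = math.hypot(*velocity)
    if speed == 0.0:
        return None
    t = 0.0
    for _ in range(32):  # bounded iteration count
        x = (center[0] + velocity[0] * t, center[1] + velocity[1] * t)
        sep = segment_distance(x, a, b) - radius
        if sep < slop:
            return t           # touching within tolerance
        t += sep / speed       # "safe" step: cannot overshoot the contact
        if t > dt:
            return None        # no impact within this timestep
    return t
```

A circle of radius 0.5 at (0, 2) moving straight down at speed 10 toward the segment (-1, 0)–(1, 0) hits at t = 0.15; moving upward instead, the search correctly reports no impact.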
Then use relative velocity to form a guaranteed "safe" step, as in a step forward in time that shouldn't result in inter-penetration. Erin Catto's GDC 2013 talk has some slides on this, IIRC. Keep moving the shapes closer together and keep calling GJK to see how far apart they are. Once they are "close enough", consider that a contact, and then make some contact points (somehow).

The nice thing about CA is it can handle rotating objects. If the shape cannot rotate it gets even simpler; for example, for many player controllers just using some hacky raycasts, or even doing iterative bisection to find the TOI, should work perfectly fine.

### #10 Finalspace  Members

Posted 20 April 2017 - 01:57 PM

First step is complete: Ported my javascript tile tracer to C++:

### #11 Finalspace  Members

Posted 23 April 2017 - 11:51 AM

So i am back: The implementation works beautifully, i get perfect line segments without any internal edges due to connected edges, but unfortunately the internal edge issues are still present for vertex vs vertex contacts when trying to fall off - see the following video demonstrating the progress and the issue:

Should i just drop any vertex vs vertex contact entirely - detected by some distance threshold + dot product???

Edited by Finalspace, 23 April 2017 - 12:38 PM.

### #12 Randy Gaul  Members

Posted 23 April 2017 - 03:36 PM

Oh definitely drop vertex to vertex cases. These can be handled implicitly by face to face! I imagine if you drop vertex to vertex your solution will work pretty well.

### #13 Finalspace  Members

Posted 25 April 2017 - 05:30 AM

Is this a legitimate way to solve that?
    // @NOTE: Distance of penetration allowed in the solver
    constant r32 PHYSICS_ALLOWED_PENETRATION_DISTANCE = 0.01f;

    // @NOTE: Used for merging two contacts together when too close
    constant r32 PHYSICS_CONTACT_MERGE_THRESHOLD_DISTANCE = 0.015f;
    StaticAssert(PHYSICS_CONTACT_MERGE_THRESHOLD_DISTANCE > PHYSICS_ALLOWED_PENETRATION_DISTANCE);

    //
    // Merge two points into one when too close together
    //
    if (output.count > 1) {
        Vec2f tangent = Vec2Cross(output.normal, 1.0f);
        Vec2f distance = output.points[1] - output.points[0];
        r32 projDistance = Vec2Dot(distance, tangent);
        if (projDistance < PHYSICS_CONTACT_MERGE_THRESHOLD_DISTANCE) {
            output.points[0] = Vec2Lerp(output.points[0], 0.5f, output.points[1]);
            --output.count;
        }
    }

    //
    // Drop vertex vs vertex case to solve internal edge issues
    //
    if (output.count == 1) {
        output.count = 0;
    }

It works, but i am not sure if this is right - especially the part where i merge two contacts into one due to clipping!

Edit*: Forget it, i am just stupid... dropping any vertex contact is just silly. I need to take the contact distance into account as well!

Edited by Finalspace, 25 April 2017 - 05:44 AM.

### #14 Finalspace  Members

Posted 26 April 2017 - 01:34 AM

Hmm, i still haven't solved the issue... nor detected whether i am in a face vs face, face vs edge or edge vs edge case. My contacts are the result of clipping the incident face segment against the reference side planes, which produces two contact points in my internal edge case 99% of the time, because both contacts are behind the incident face plane -> So it's a face vs face case :-(

I really need one quiet evening, without any kids jumping around... so i can focus on that.

### #15 Randy Gaul  Members

Posted 27 April 2017 - 11:12 AM

Yeah you definitely can't drop cases with one contact. I just realized you must be treating the individual line segments as separate colliders. So what happens in the video is you do get a face to face contact, but clipping gives you one or two points.
In the two point case, you have some merging code, so it will probably be merged to a single point that looks like a vertex to vertex collision.

The problem becomes: how to treat the corner where two line segments meet end to end as a new and unique voronoi region. In your video each corner voronoi region would look like 1/4th of a circle, and normals can point outward away from the corner (pointing to the obtuse angle where the two line segments meet).

This is what Box2D does for its chain shape. A chain is a set of end to end connected line segments, and the narrow phase actually treats the connected endpoints as voronoi regions. You can check out the chain shape collision detection for an example implementation.

---

But since you're using only AABBs, maybe a more hacky solution can work. Another more hacky solution might be to fiddle around and add in a logic layer. Take the resulting manifold after merging close contact points. Then have some logic to see if the player is hitting a wall on the side (which would look like 2 contact points sharing the same normal). Then remove all other *singular* contact points that do not share the same normal, but keep other cases of hitting another wall (2 contact points with the same normal). Something very hacky like this may work, and it might be easier than doing a chain shape. The chain shape is pretty much collision detection on a mesh with adjacency information, but in 2D.

Edited by Randy Gaul, 27 April 2017 - 11:13 AM.

### #16 Finalspace  Members

Posted 27 April 2017 - 10:56 PM

Yeah, you definitely can't drop cases with one contact. I just realized you must be treating the individual line segments as separate colliders. So what happens in the video is you do get a face to face contact, but clipping gives you one or two points. In the two point case, you have some merging code, so it will probably be merged to a single point that looks like a vertex to vertex collision.
The problem becomes: how to treat the corner where two line segments meet end to end as a new and unique voronoi region. In your video each corner voronoi region would look like 1/4th of a circle, and normals can point outward away from the corner (pointing to the obtuse angle where the two line segments meet). This is what Box2D does for its chain shape. A chain is a set of end to end connected line segments, and the narrow phase actually treats the connected endpoints as voronoi regions. You can check out the chain shape collision detection for an example implementation.

---

But since you're using only AABBs, maybe a more hacky solution can work. Another more hacky solution might be to fiddle around and add in a logic layer. Take the resulting manifold after merging close contact points. Then have some logic to see if the player is hitting a wall on the side (which would look like 2 contact points sharing the same normal). Then remove all other *singular* contact points that do not share the same normal, but keep other cases of hitting another wall (2 contact points with the same normal). Something very hacky like this may work, and it might be easier than doing a chain shape. The chain shape is pretty much collision detection on a mesh with adjacency information, but in 2D.

This is the next thing I wanted to talk about. But first I must clarify: I have a general physics solution including multiple shapes (planes, line segments, circles, polygons, boxes) with rotation dynamics support - I am not bound to AABBs only. The video just looks like it, because the player has an inverse inertia of zero - so it never rotates - but internally it is a full rigid body; basically an extended, fully custom implementation of Box2D-Lite. At least the solving part is almost identical.

To get to the topic: you are totally right! Currently I treat line segments the same way I do polygon vs. polygon, but reduce the line segment to a single edge using SAT.
This is totally wrong, and it is the reason why I made a separate line segment vs. polygon case in my original JavaScript physics system in the first place - which I didn't port over to C for my current system. Thinking "smart" like this: "Why did I make a separate case for line segments? No need, it's just a simple SAT call plus the same code as for vertex-based vs. vertex-based shapes" was not a good choice for this case.

So I will create a separate contact generator for line segment vs. vertex-based shapes by looking at the voronoi region only, so the normal is based on that region and not based on SAT - then everything should work just fine?

Also, I will have a look at the Box2D source and see how that chain shape works, because it's basically the thing I want... I don't want one line segment for each tile edge, but rather connected line segments -> the tile tracing returns an array of arrays of connected vertices anyway. So this would be much better, I think.

Edited by Finalspace, 27 April 2017 - 11:06 PM.

### #17 Finalspace  Members

Posted 28 April 2017 - 04:52 AM

Which normal is correct for a chained line segment? https://jsfiddle.net/6ekLfe9n/

- Locked surface normal? <- I assume this?
- Locked positive region?
- No lock at all?

Btw. I found the code for edge vs. polygon in Box2D: b2EPCollider::Collide()... Looks really complicated...

Edited by Finalspace, 28 April 2017 - 05:34 AM.

### #18 Randy Gaul  Members

Posted 29 April 2017 - 11:20 AM

Depends on what you want. With connected line segments the normal would be between the angle the two segments form, so something closer to the locked one is probably what you are after.

Edited by Randy Gaul, 29 April 2017 - 11:20 AM.
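To make "the normal would be between the angle the two segments form" concrete, here is a minimal Python sketch of one way to handle the shared endpoint of two chained segments. It assumes CCW winding and models the corner's voronoi region as the arc between the two adjacent edge normals; the function names are illustrative, not Box2D's actual API:

```python
import math

def seg_normal(a, b):
    """Outward (left-hand) normal of segment a -> b, assuming CCW winding."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    inv = 1.0 / math.hypot(dx, dy)
    return (-dy * inv, dx * inv)

def in_corner_region(n, n_prev, n_next):
    """True if unit normal n lies inside the arc spanned by the normals of the
    two edges meeting at the corner (the corner's voronoi region)."""
    cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
    return cross(n_prev, n) >= 0.0 and cross(n, n_next) >= 0.0

def clamp_corner_normal(n, n_prev, n_next):
    """Keep a contact normal inside the corner's arc; outside the arc, snap it
    back to the nearer edge normal (suppresses phantom internal-edge normals)."""
    if in_corner_region(n, n_prev, n_next):
        return n
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    return n_prev if dot(n, n_prev) >= dot(n, n_next) else n_next
```

For example, with a floor segment followed by a wall segment (normals (0,1) and (-1,0)), a candidate normal pointing along the floor tangent would be snapped back to the floor normal instead of producing a sideways impulse at the internal edge, while a genuine diagonal corner normal inside the arc is kept.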
2017-05-27 21:21:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2063359022140503, "perplexity": 1387.965348134303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609061.61/warc/CC-MAIN-20170527210417-20170527230417-00133.warc.gz"}
https://themillennialmirror.com/trends/question-the-mathematical-fact-that-78-87-is-known-as-what/
# Question: The mathematical fact that 7+8 = 8+7 is known as what?

Answer: The commutative law is a basic property used in mathematics. It states that the order in which you add or multiply two real numbers does not affect the result. Stated symbolically, when we add: a + b = b + a, and when we multiply: a × b = b × a. From these laws it follows that any finite sum or product is unaltered by reordering its terms or factors. The word commutative is a combination of the French word commuter, meaning "to substitute or switch", and the suffix -ative, meaning "tending to", so the word literally means "tending to substitute or switch."

https://themillennialmirror.com/trends/the-mathematical-facts-that-78-87-is-known-as-what/

### Top 10 Results

1. The mathematical fact that 7+8 = 8+7 is known as Commutative law

The word commutative is a combination of the French word commuter, meaning "to substitute or switch", and the suffix -ative, meaning "tending to", so the word literally means "tending to substitute or switch."

Step 2: Answer to the question "The mathematical fact that 7+8 = 8+7 is known as what?" Commutative law:

2. The mathematical fact that 7+8 = 8+7 is known as Commutative law

The mathematical fact that 7+8 = 8+7 is known as what? Answer: Commutative law. The commutative law is a basic property used in mathematics. It states that the order in which you add or multiply two real numbers does not affect the result. Stated symbolically when we add: …

https://www.trivia.net/the-mathematical-fact-that-7-8-8-7-is-known-as-what

3. Commutative Property

Distributive Law. The "Distributive Law" is the BEST one of all, but needs careful attention. This is what it lets us do: 3 lots of (2+4) is the same as 3 lots of 2 plus 3 lots of 4. So, the 3× can be "distributed" across the 2+4, into 3×2 and 3×4.
And we write it like this: https://www.mathsisfun.com/associative-commutative-distributive.html

4. The mathematical fact that 7+8 = 8+7 is known as Commutative law

3rd Grade Math Lesson 44: Using the Commutative Property to Find Known Facts of 6, 7, 8, and 9 Homework
https://www.teachertube.com/videos/lesson-44-using-the-commutative-property-to-find-known-facts-of-6-7-8-and-9-467211

5. Addition (usually signified by the plus symbol +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. The example in the adjacent picture shows a combination of three apples and two apples, making a total of five apples.

6. The mathematical fact that 7+8 = 8+7 is known as Commutative law

When an amount grows at a fixed percent per unit time, the growth is exponential. To find $A_0$ we use the fact that $A_0$ is the amount at time zero, so $A_0 = 10$. To find $k$, use the fact that after one hour ($t = 1$) the population doubles from 10 to 20. The formula is derived as follows
https://math.libretexts.org/Bookshelves/Precalculus/Book%3A_Precalculus_(OpenStax)/04%3A_Exponential_and_Logarithmic_Functions/4.08%3A_Exponential_and_Logarithmic_Models

7. The mathematical fact that 7+8 = 8+7 is known as Commutative law

Confident doublers will appreciate that multiplying by eight can be done by doubling, doubling and doubling again. If children know that multiplication is commutative, they can turn around most of the seven and eight times table facts - 7 x 5 becomes 5 x 7. Only three facts are then not covered in the other tables - 7 x 7, 8 x 8, and 7 x 8.
https://www.teachprimary.com/learning_resources/view/ks1-ks2-maths-multiplication-facts

8. The mathematical fact that 7+8 = 8+7 is known as Commutative law

Commutative Property of Addition. If we add two whole numbers in different orders, then the sum remains the same.
A + B = B + A

Let us see some examples.

Example 1. Check if 7 + 8 = 8 + 7.
Solution. 7 + 8 = 15 and 8 + 7 = 15. Thus, 7 + 8 is equal to 8 + 7.

Example 2. Check if 12 + 13 = 13 + 12.
Solution. 12 + 13 = 25 and 13 + 12 = 25.

https://letsplaymaths.com/Class-6-Whole-Number.html

9. The mathematical fact that 7+8 = 8+7 is known as Commutative law

…to ten to calculate other math facts and can extend this to multiples of ten in later grades. For 8 + 5, think 8 + 2 + 3 is 10 + 3, or 13. Compensation: using other known math facts and compensating - for example, adding 2 to an addend and taking 2 away from the sum. For 25 + 33, think 25 + 35 - 2 is 60 - 2, or 58.
https://www.edu.gov.mb.ca/k12/cur/math/mm_gr8/strategies.pdf

10. The mathematical fact that 7+8 = 8+7 is known as Commutative law

CCSS.Math.Content.1.OA.B.3 Apply properties of operations as strategies to add and subtract. Examples: If 8 + 3 = 11 is known, then 3 + 8 = 11 is also known. (Commutative property of addition.) To add 2 + 6 + 4, the second two numbers can be added to make a ten, so 2 + 6 + 4 = 2 + 10 = 12. (Associative property of addition.)

### Wikipedia Search Results

1. List of mathematical symbols
The following is a list of mathematical symbols used in all branches of mathematics to express a formula or to represent a constant. A mathematical concept…
https://en.wikipedia.org/wiki/List of mathematical symbols

2. Space (mathematics)
precisely definable as the cultural artifact of mathematics itself. For more information on mathematical structures see Wikipedia: mathematical structure, equivalent…
https://en.wikipedia.org/wiki/Space (mathematics)

3. Parity (mathematics)
equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative…
https://en.wikipedia.org/wiki/Parity (mathematics)
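For readers who like to see the law checked mechanically, here is a short throwaway Python illustration (not part of any source listed above):

```python
import itertools

# The commutative law: order does not matter for addition and multiplication.
for a, b in itertools.product(range(-5, 6), repeat=2):
    assert a + b == b + a
    assert a * b == b * a

# By contrast, subtraction and division are not commutative:
assert 7 - 8 != 8 - 7

print(7 + 8 == 8 + 7)  # True
```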
2020-10-30 22:51:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503164887428284, "perplexity": 491.5700941614386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911792.65/warc/CC-MAIN-20201030212708-20201031002708-00700.warc.gz"}
https://www.experts-exchange.com/questions/26444113/open-folder-in-powershell.html
Solved

# open folder in powershell

Posted on 2010-09-01
Medium Priority
1,454 Views

Hi, I'd like a PowerShell script to open a location, e.g. when run, the folder c:\data\test\%hostname%\tools opens - much like if there was a shortcut to it. PowerShell preferred, VB OK.

Question by: mhamer

LVL 42 Expert Comment
ID: 33576449

Use Invoke-Item:

invoke-item c:\data\test\%hostname%\tools

LVL 42 Accepted Solution
sedgwick earned 2000 total points
ID: 33576477

To get %hostname% (environment variable) use the following:

$hostname = $Env:hostname
invoke-item c:\data\test\$hostname\tools

Author Comment
ID: 33576597

Great, works perfectly, thank you. What would be the VB equivalent? Just out of interest? You get the points anyway.

LVL 42 Expert Comment
ID: 33576703

In VBScript:

Set objShell = CreateObject("WScript.Shell")
hostname = objShell.ExpandEnvironmentStrings("%hostname%")
objShell.Run "explorer c:\data\test\" & hostname & "\tools

Author Comment
ID: 33577013

"Unterminated string constant" when I use that.

LVL 42 Expert Comment
ID: 33577119

I forgot the " in line 3:

Set objShell = CreateObject("WScript.Shell")
hostname = objShell.ExpandEnvironmentStrings("%hostname%")
objShell.Run "explorer c:\data\test\" & hostname & "\tools"
2017-08-19 13:36:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3578832149505615, "perplexity": 13264.153229864341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105451.99/warc/CC-MAIN-20170819124333-20170819144333-00382.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tm&paperid=1880&option_lang=eng
Tr. Mat. Inst. Steklova, 2009, Volume 266, Pages 149–183 (Mi tm1880)

The van Kampen Obstruction and Its Relatives

S. A. Melikhov
Steklov Mathematical Institute, Russian Academy of Sciences, Moscow, Russia

Abstract: We review a cochain-free treatment of the classical van Kampen obstruction $\vartheta$ to embeddability of an $n$-polyhedron in $\mathbb R^{2n}$ and consider several analogs and generalizations of $\vartheta$, including an extraordinary lift of $\vartheta$, which has been studied by J.-P. Dax in the manifold case. The following results are obtained: (1) The $\mod2$ reduction of $\vartheta$ is incomplete, which answers a question of Sarkaria. (2) An odd-dimensional analog of $\vartheta$ is a complete obstruction to linkless embeddability ($=$“intrinsic unlinking”) of a given $n$-polyhedron in $\mathbb R^{2n+1}$. (3) A “blown-up” one-parameter version of $\vartheta$ is a universal type 1 invariant of singular knots, i.e., knots in $\mathbb R^3$ with a finite number of rigid transverse double points. We use it to decide in simple homological terms when a given integer-valued type 1 invariant of singular knots admits an integral arrow diagram ($=$Polyak–Viro) formula. (4) Settling a problem of Yashchenko in the metastable range, we find that every PL manifold $N$ nonembeddable in a given $\mathbb R^m$, $m\ge\frac{3(n+1)}2$, contains a subset $X$ such that no map $N\to\mathbb R^m$ sends $X$ and $N\setminus X$ to disjoint sets.
(5) We elaborate on McCrory's analysis of the Zeeman spectral sequence to geometrically characterize “$k$-co-connected and locally $k$-co-connected” polyhedra, which we embed in $\mathbb R^{2n-k}$ for $k<\frac{n-3}2$, thus extending the Penrose–Whitehead–Zeeman theorem. Full text: PDF file (508 kB) References: PDF file   HTML file English version: Proceedings of the Steklov Institute of Mathematics, 2009, 266, 142–176 Bibliographic databases: UDC: 515.164.6+515.162.8+515.148 Received in May 2009 Language: Citation: S. A. Melikhov, “The van Kampen Obstruction and Its Relatives”, Geometry, topology, and mathematical physics. II, Collected papers. Dedicated to Academician Sergei Petrovich Novikov on the occasion of his 70th birthday, Tr. Mat. Inst. Steklova, 266, MAIK Nauka/Interperiodica, Moscow, 2009, 149–183; Proc. Steklov Inst. Math., 266 (2009), 142–176 Citation in format AMSBIB \Bibitem{Mel09} \by S.~A.~Melikhov \paper The van Kampen Obstruction and Its Relatives \inbook Geometry, topology, and mathematical physics.~II \bookinfo Collected papers. Dedicated to Academician Sergei Petrovich Novikov on the occasion of his 70th birthday \serial Tr. Mat. Inst. Steklova \yr 2009 \vol 266 \pages 149--183 \publ MAIK Nauka/Interperiodica \mathnet{http://mi.mathnet.ru/tm1880} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=2603266} \zmath{https://zbmath.org/?q=an:1196.57019} \elib{http://elibrary.ru/item.asp?id=12901683} \transl \jour Proc. Steklov Inst. Math. \yr 2009 \vol 266 \pages 142--176 \crossref{https://doi.org/10.1134/S0081543809030092} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-70350349674} • http://mi.mathnet.ru/eng/tm1880 • http://mi.mathnet.ru/eng/tm/v266/p149 SHARE: Citing articles on Google Scholar: Russian citations, English citations Related articles on Google Scholar: Russian articles, English articles This publication is cited in the following articles: 1. 
Matoušek J., Tancer M., Wagner U., “Hardness of embedding simplicial complexes in $\mathbb R^d$”, J. Eur. Math. Soc. (JEMS), 13:2 (2011), 259–295
2. Wagner U., “Minors in random and expanding hypergraphs”, Computational Geometry (SCG 11), 2011, 351–360
3. Freedman M., Krushkal V., “Geometric complexity of embeddings in $\mathbb R^d$”, Geom. Funct. Anal., 24:5 (2014), 1406–1430
4. Goncalves D., Skopenkov A., “A useful lemma on equivariant maps”, Homol. Homotopy Appl., 16:2 (2014), 307–309
5. S. A. Melikhov, “Transverse fundamental group and projected embeddings”, Proc. Steklov Inst. Math., 290:1 (2015), 155–165
6. Oleg R. Musin, Alexey Yu. Volovikov, “Borsuk–Ulam type spaces”, Mosc. Math. J., 15:4 (2015), 749–766
7. A. B. Skopenkov, “A user's guide to the topological Tverberg conjecture”, Russian Math. Surveys, 73:2 (2018), 323–353
2020-02-21 13:36:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41016867756843567, "perplexity": 7414.547728099941}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00116.warc.gz"}
https://www.beatthegmat.com/700-gmat-score-indian-it-male-engineer-makes-it-to-isb-t274398.html
## 700 GMAT Score, Indian, IT, Male Engineer makes it to ISB

This topic has 0 member replies

dhonu121, Master | Next Rank: 500 Posts
Joined 21 Aug 2011
Posted: 316 messages
Followed by: 6 members
Test Date: 24th September
Target GMAT Score: 700+

#### 700 GMAT Score, Indian, IT, Male Engineer makes it to ISB

Mon Feb 24, 2014 11:01 am

This was my third attempt at ISB, and after so much polishing it was inevitable that I would make it. I am so glad and satisfied now that all my hard work has finally paid off. It started with a dream in the year 2011, when I started preparing for the GMAT and thought I would finish 2011 with an admit from ISB. However, fate had other plans for me and I could not make it. Undaunted by the outcome, I persevered and kept up the momentum. I retook the GMAT in 2012 and reapplied. However, I still got rejected. The situation seemed so tough that I thought I would never be able to make it to ISB.
However, as they say: don't give up. I reapplied in 2013 in R2, and this time I got the admission offer to the ISB class of 2015. All I would like to say to aspirants is: never, never give up. Keep polishing yourself and keep working on yourself. Your hard work will pay off one day. In case you need any advice on any aspect of your application, feel free to contact me. All the best.

_________________
If you've liked my post, let me know by pressing the thanks button.
2018-04-20 00:52:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2530159652233124, "perplexity": 14286.599641128108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937090.0/warc/CC-MAIN-20180420003432-20180420023432-00259.warc.gz"}
https://codereview.stackexchange.com/questions/226383/insert-elements-into-a-map-container
# Insert elements into a map container

I am inserting elements into the map container. For this I am using the same statements multiple times in the function below. Is there any way I can write a generic function for this, so that at a later point in time, when required, I can insert new elements?

```cpp
bool EMRMgr::GetParams()
{
    EmIOStruct emIOstructObj;
    WCHAR szValue[MAX_PATH] = { 0 };
    DWORD dwDataSize = sizeof(szValue) / sizeof(WCHAR);
    long lRes = 0;
    McStorageType mcStorageTypeObj = McStorageType::eRegistry;
    std::wstring value;

    // First time I am using the below statements to insert elements into the map container
    mcIOstructObj.lpszValueName = (LPWSTR)ER_ID;
    memset(szValue, 0, MAX_PATH);
    mcIOstructObj.lpData = (LPBYTE)&szValue[0];
    value.clear();
    if ((LPWSTR)mcIOstructObj.lpData == nullptr)
    {
        value.assign(L"");
    }
    else
    {
        value.assign((LPWSTR)mcIOstructObj.lpData);
    }
    m_fileParams.insert({ (std::wstring) ER_ID, value });

    // Second time I am using the below statements to insert elements into the map container
    mcIOstructObj.lpszValueName = (LPWSTR)CPS;
    memset(szValue, 0, MAX_PATH);
    mcIOstructObj.lpData = (LPBYTE)&szValue[0];
    value.clear();
    if ((LPWSTR)mcIOstructObj.lpData == nullptr)
    {
        value.assign(L"");
    }
    else
    {
        value.assign((LPWSTR)mcIOstructObj.lpData);
    }
    m_fileParams.insert({ (std::wstring) CPS, value });

    return true;
}
```

• Code Review is a place to review implemented, working code. As it currently stands, your question appears to indicate that you are seeking help for not yet implemented code, which is off-topic for Code Review. – L. F. Aug 18 at 18:01
• At least to me, it looks like this is code that is clumsy to use, but does function. Unless I'm missing something, it's a fine candidate for code review. – Jerry Coffin Aug 18 at 22:13
• Please review the code now. – John Paul Coder Aug 20 at 5:10
• I have rolled back your changes. Once answers are made, you should not change the question in a way that invalidates any answer.
– dfhwze Aug 20 at 5:30

There is missing code, so I'm assuming that ER_ID and CPS are (wide) string constants. If not, you can easily sub in the data type. Since the only difference between the blocks of code is which string to use (ER_ID vs. CPS), you can make a member function of EMRMgr that has the same block of code but takes the string to use as a parameter. Unless I've overlooked something, this should work (note that the parameter needs to be a std::wstring, and the function needs a return statement, for it to compile):

```cpp
bool EMRMgr::DoTheThing(const std::wstring& str)
{
    mcIOstructObj.lpszValueName = (LPWSTR)str.c_str();
    memset(szValue, 0, MAX_PATH);
    mcIOstructObj.lpData = (LPBYTE)&szValue[0];
    value.clear();
    if ((LPWSTR)mcIOstructObj.lpData == nullptr)
    {
        value.assign(L"");
    }
    else
    {
        value.assign((LPWSTR)mcIOstructObj.lpData);
    }
    m_fileParams.insert({ str, value });
    return true;
}
```
2019-10-22 22:59:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2127009630203247, "perplexity": 3354.350797507854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987824701.89/warc/CC-MAIN-20191022205851-20191022233351-00308.warc.gz"}
https://www.vedantu.com/question-answer/if-sqrt-2-sec-x-+-tan-x-1-then-the-value-of-x-is-class-11-maths-cbse-5fd741fb147a833c29ece1c1
# If $\sqrt 2 \sec x + \tan x = 1$, then the value of $x$ is
A. $2n\pi + \dfrac{\pi }{3}$
B. $2n\pi - \dfrac{\pi }{4}$
C. $2n\pi + \dfrac{\pi }{6}$
D. $2n\pi + \dfrac{\pi }{{12}}$

Verified
198.9k+ views

Hint: Here we will use trigonometric identities. First, we will convert the given equation into sine and cosine functions and simplify it. Then we will express the constants in the equation in terms of trigonometric functions and simplify to get the value of $x$.

The given equation is $\sqrt 2 \sec x + \tan x = 1$.

We will write the given equation in terms of the sine and cosine functions. We know that the secant function is the reciprocal of the cosine function and the tangent function is the ratio of sine to cosine. Therefore, we get
$\Rightarrow \sqrt 2 \dfrac{1}{{\cos x}} + \dfrac{{\sin x}}{{\cos x}} = 1$

Now we combine the fractions over the common denominator and multiply both sides by $\cos x$:
$\Rightarrow \dfrac{{\sqrt 2 + \sin x}}{{\cos x}} = 1$
$\Rightarrow \sqrt 2 + \sin x = \cos x$

Rearranging the above equation, we get
$\Rightarrow \cos x - \sin x = \sqrt 2$

Dividing both sides by $\sqrt 2$, we get
$\Rightarrow \dfrac{1}{{\sqrt 2 }}\cos x - \dfrac{1}{{\sqrt 2 }}\sin x = 1$

Now we express the constants in terms of trigonometric functions. We know that $\sin \dfrac{\pi }{4} = \cos \dfrac{\pi }{4} = \dfrac{1}{{\sqrt 2 }}$ and $\cos 2n\pi = 1$. Substituting these values into the equation, we get
$\Rightarrow \cos \dfrac{\pi }{4}\cos x - \sin \dfrac{\pi }{4}\sin x = \cos 2n\pi$

Now, using the identity $\cos \left( {A + B} \right) = \cos A\cos B - \sin A\sin B$, we get
$\Rightarrow \cos \left( {\dfrac{\pi }{4} + x} \right) = \cos 2n\pi$

Since the cosines of the two sides are equal, we get
$\Rightarrow \dfrac{\pi }{4} + x = 2n\pi$

Now by solving this we will get the value of $x$.
Subtracting $\dfrac{\pi }{4}$ from both sides, we get $\Rightarrow x = 2n\pi - \dfrac{\pi }{4}$. Hence the value of $x$ is $2n\pi - \dfrac{\pi }{4}$, so option B is correct.

Note: There are six basic trigonometric functions: sine, cosine, tangent, cosecant, secant and cotangent; cosecant, secant and cotangent are the reciprocal functions of sine, cosine and tangent respectively. While solving trigonometric equations, it usually helps to write the trigonometric functions in terms of sine and cosine, which makes the equation easier to simplify. Every trigonometric function is periodic, meaning it repeats its values after a fixed interval: the period of sine, cosine, secant and cosecant is $2\pi$, while the period of tangent and cotangent is $\pi$.
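As a sanity check, the claimed solution can be verified numerically. The sketch below (my own, not part of the original solution) evaluates the left-hand side of the given equation at $x = 2n\pi - \pi/4$ for several integers $n$:

```python
import math

def lhs(x):
    # left-hand side of the original equation: sqrt(2)*sec(x) + tan(x)
    return math.sqrt(2) / math.cos(x) + math.tan(x)

# option B: x = 2*n*pi - pi/4 satisfies the equation for every integer n
for n in range(-3, 4):
    x = 2 * n * math.pi - math.pi / 4
    assert abs(lhs(x) - 1) < 1e-9

# option A's value (n = 0) does not: lhs(pi/3) is far from 1
print(abs(lhs(math.pi / 3) - 1) > 0.5)  # -> True
```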
2022-10-05 09:38:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9692224264144897, "perplexity": 91.29743875181377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00388.warc.gz"}
https://truongtx.me/2018/05/01/with-html-canvas-draw-2d-image-on-cylinder-surface
# Overview

A long time ago, I worked on a project which allowed users to select images on a 2D canvas and then draw that image on a cylinder surface (a mug, in this case). I Googled for suitable libraries but couldn't find any. I also asked a question on stackoverflow, but the answer did not satisfy me: it demonstrates how to stretch the image, not how to bend it (which is what is shown in the picture above). Because of that, I decided to implement it myself, and it turned out not to be as hard as I thought. Everything is just a loop over basic mathematical formulas that I was taught in (Vietnamese) high school.

# The idea

It's best to demonstrate the geometrical idea using an image. Below is an image showing the view of the mug from above. Since the mug is a cylindrical object, viewed from above it is a circle, as in the picture.

• O: the center of the circle
• the red arrows: the user's eyes, the view direction toward the mug
• AB line segment: a diameter of the circle
• AB curve: the image that will be drawn on the real mug (the original image). Since the red arrows illustrate the view direction, we will see half of the mug (the AB curve)
• x0…xn: the image displayed on the computer screen (the reflected image), i.e. the image that reflects the one drawn on the mug. Its width is the same as the AB line segment.

The basic idea is to loop line by line, from top to bottom, 1px per line. Each line can be represented just like the idea image above. For each line, loop over each column (each pixel) from left to right (loop x from x0 to xn). Within each iteration, project the current pixel onto the circle (the AB curve, the image that is drawn on the real mug), calculate the corresponding pixel on that original image, take it and draw it on the reflected image.
For each iteration, we have

• x: the current (width) position
• M: the projection of x onto the AB line segment
• N: the projection of x onto the circle
• aa: the angle between ON and OM

The length of the AN arc can be calculated from the aa angle, and the aa angle can be calculated from the right triangle formed with OM and ON. After finishing all the loops, we will get the output reflected image, which is the bent image. The final task is to take that image and draw it on the real image of the mug.

Actually, the above is just the initial idea. It is not the most optimized way to implement this, since there would be many repeated calculations; I will not follow it precisely in the implementation (I will explain later). Also, this works for a side view direction only (as demonstrated in the image). To make the image bend in the vertical direction (to fit the real mug image when viewed from a higher vantage point), we could modify the formula a bit. However, for simplicity, I only apply a small hack to bend the image vertically after transforming it. To achieve this, we first need to find the equation of the AOB parabola. After that, simply slice the left image vertically into 1px-wide slices and translate each slice according to the coordinates of the corresponding point on the parabola.

This idea can be implemented using the pixel manipulation API of the HTML canvas. If you are not used to pixel manipulation, read this post from Mozilla: Pixel Manipulation with Canvas. You can also read about Image Blending using HTML Canvas, another, simpler example using the pixel manipulation API that I have made before.

# Prepare the Images and Information

First, we need to prepare the original image. This image should have the same width/height ratio as the mug, meaning the width of the image should match the mug's diameter and its height should match the height of the mug.
Note: you may notice that in the left and right views of the final mug image (the first image in this post), the image is not drawn over the full mug, which means the original image's width is not the same as the mug's perimeter. To keep things easy, just add some transparent regions on the left and right of the original image to make its width equal to the mug's perimeter (you can check this on the original image above). I will demonstrate with the middle view of the canvas, which means we will need to crop this image (in width) from position 1/4 to 3/4.

Now retrieve the image using Javascript.

# Prepare the Canvas for the Cropped Original and Reflected Images

We need 2 canvases to hold the information of the original image and the reflected image. From now on, when I refer to the original image, I mean the cropped one (1/4 to 3/4). The function for creating the canvas that holds the original image and all its stuff (context, pixel data,…). The function for creating the canvas that holds the reflected image and all its stuff (context, pixel data,…). Call the 2 functions.

# Start the loop…

I will show the idea image here again so it is easy to follow. Before the loop, we need to define some variables (based on the image above). Magic happens now.

# Draw the pixel data and Get the output image

Now, you need to draw all the pixel data that you have generated on the reflected canvas and let it export the PNG image. What you get is something similar to this.

# Optimize it a bit

The above solution works fine. However, there are a lot of repeated calculations in the loop. For each column (x), even though y changes, the x of the projected pixel remains the same, so looping line by line is not a good idea. Instead, the loop can be changed to iterate over x first, calculate the corresponding x position in the original image, and then loop over y to take all the pixels of that column.
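The core of the per-column mapping (the "magic happens now" step) can be sketched as follows. This is my own reconstruction in Python rather than the post's JavaScript, and the helper name and parameters are mine: given a destination column on the screen image of width 2r (the diameter), it returns the corresponding column on the cropped original image, whose width covers the visible half-perimeter πr.

```python
import math

def source_column(dest_x, dest_width, src_width):
    """Map a column of the reflected (on-screen) image to a column of the
    cropped original image wrapped around the visible half of the cylinder."""
    r = dest_width / 2.0
    # angle of the projected point N on the circle, measured from OB:
    # pi at the left edge (point A), 0 at the right edge (point B)
    theta = math.acos((dest_x - r) / r)
    arc_from_a = r * (math.pi - theta)   # arc length A..N along the half circle
    return arc_from_a / (math.pi * r) * src_width

# the centre column of the screen image maps to the centre of the source image
print(source_column(100, 200, 314))  # -> 157.0
```

Columns near the edges of the screen image map to rapidly widening strips of the source image, which is exactly what produces the bending effect.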
# Make it a bit more realistic (actually just a fake)

Let me repeat: to transform the left image into the right one, simply slice the left image vertically, 1px per slice. After that, translate those slices according to the AOB parabola. However, we don't need to wait until the final reflected image is generated to do the translation; we can modify the loop to translate each column directly.

The AOB parabola can be described as

• y = ax² + bx + c

Since A has coordinates (0, 0), the equation reduces to

• y = ax² + bx

Let the coordinates of O be (x₁, y₁) and of B be (x₂, y₂); then the values of a and b are (don't ask me where I get this, you can derive it yourself :D )

• b = (y₂x₁² - y₁x₂²) / (x₂x₁² - x₁x₂²)
• a = (y₁ - bx₁) / x₁²

Define those variables first, before the loop. Next, come back to the createReflectedCanvas function and modify the reflected canvas' height. Inside each x loop, before starting the y loop, calculate the current Y offset (using the parabola equation) and translate by that Y offset when drawing on the reflected canvas. The final loop will look like this. And the final images that you get will be something similar to this.

# Combine with the real image

The final step is to draw the reflected image that we have generated on a new canvas that contains the image of the mug. You will get these images.
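The formulas for a and b above can be checked with a short sketch (Python for brevity; the function name is mine). It fits y = ax² + bx through the origin and two given points:

```python
def parabola_through_origin(p1, p2):
    """Coefficients (a, b) of y = a*x^2 + b*x passing through (0,0), p1, p2."""
    x1, y1 = p1
    x2, y2 = p2
    b = (y2 * x1 ** 2 - y1 * x2 ** 2) / (x2 * x1 ** 2 - x1 * x2 ** 2)
    a = (y1 - b * x1) / x1 ** 2
    return a, b

# sanity check against y = x^2 - 3x, which passes through the origin
a, b = parabola_through_origin((1.0, -2.0), (4.0, 4.0))
print(a, b)  # -> 1.0 -3.0
```

In the loop, evaluating a·x² + b·x at each column gives the Y offset by which to translate that slice.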
2018-05-26 13:30:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5503498911857605, "perplexity": 841.4015865388417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.75/warc/CC-MAIN-20180526131802-20180526151802-00518.warc.gz"}
https://byjus.com/question-answer/which-of-the-following-equation-does-not-represent-a-simple-harmonic-motion-y-a-sin/
Question # Which of the following equations does not represent a simple harmonic motion?

A. $$y=a\sin\omega t$$
B. $$y=a\cos\omega t$$
C. $$y=a\sin\omega t+b\cos\omega t$$
D. $$y=a\tan\omega t$$

Solution ## The correct option is D, $$y= a\tan\omega t$$. The standard equation of S.H.M., $$\dfrac {d^2 y}{dt^2}=-\omega^2 y$$, is satisfied by options A, B and C but not by $$y=a\tan \omega t$$.
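A quick numerical check confirms this. The sketch below is mine (the sample times, finite-difference step and tolerance are arbitrary choices); it tests whether d²y/dt² = -ω²y holds for each candidate:

```python
import math

def is_shm(y, omega=2.0, eps=1e-4):
    """Numerically test d2y/dt2 == -omega^2 * y at a few sample times,
    using a central finite difference for the second derivative."""
    h = 1e-4
    for t in (0.1, 0.3, 0.5):
        d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
        if abs(d2 + omega ** 2 * y(t)) > eps * max(1.0, abs(y(t))):
            return False
    return True

w = 2.0
print(is_shm(lambda t: math.sin(w * t)))                          # A: True
print(is_shm(lambda t: math.cos(w * t)))                          # B: True
print(is_shm(lambda t: math.sin(w * t) + 0.5 * math.cos(w * t)))  # C: True
print(is_shm(lambda t: math.tan(w * t)))                          # D: False
```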
2022-01-16 23:06:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7246599197387695, "perplexity": 7589.075269000504}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00690.warc.gz"}
https://www.physicsforums.com/threads/geometrical-interpretation-of-ricci-and-riemann-tensors.879040/
A Geometrical interpretation of Ricci and Riemann tensors? 1. Jul 15, 2016 Victor Alencar I do not get the conceptual difference between the Riemann and Ricci tensors. It is obvious to me that the Riemann tensor carries more information than the Ricci tensor, but what information? The Riemann tensor appears when you compare the change of the same vector (or other tensor) transported along two different paths: you can see it by commuting two covariant derivatives of a vector, or by computing the parallel transport around a closed loop. Studying general relativity I saw: "If the Riemann tensor is zero, the space is flat; if the Ricci tensor is zero, the space is empty." Does someone know a mathematical proof of this statement? And what does the Ricci scalar tell us? Is it always directly proportional to the curvature? 2. Jul 16, 2016 Markus Hanke The Ricci tensor can be taken as the trace of the Riemann tensor, hence it is of lower rank and has fewer components. If you have a small geodesic ball in free fall, then (ignoring shear and vorticity) the Ricci tensor tells you the rate at which the volume of that ball begins to change, whereas the Riemann tensor contains information not only about its volume, but also about its shape. If the Riemann tensor is zero, then the equation of geodesic deviation reduces to the equation of a straight line, meaning that the separation vector between geodesics is constant. Hence, initially parallel lines will remain parallel everywhere: you are dealing with a flat manifold. As for empty space, this is just a consequence of the Einstein equations. If you write them in trace-reversed form and set T = 0 (empty space), you get a vanishing Ricci tensor. Hence, empty space implies Ricci flatness. The Ricci scalar is the trace of the Ricci tensor, and it is a measure of scalar curvature. It can be taken as a way to quantify how the volume of a small geodesic ball (or alternatively its surface area) differs from that of a reference ball in flat space.
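For reference, in index notation the contractions described above, together with the trace-reversed Einstein equations (in four dimensions; sign conventions vary, this follows a common one), read:

```latex
R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu}, \qquad
R = g^{\mu\nu} R_{\mu\nu}, \qquad
R_{\mu\nu} = \frac{8\pi G}{c^4}\left(T_{\mu\nu} - \tfrac{1}{2}\, T\, g_{\mu\nu}\right)
```

Setting $T_{\mu\nu} = 0$ (and hence $T = 0$) in the last equation immediately gives $R_{\mu\nu} = 0$, i.e. empty space implies Ricci flatness.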
Perhaps you might find this helpful : http://arxiv.org/pdf/gr-qc/0401099.pdf
2017-08-19 03:05:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458785057067871, "perplexity": 401.332234413687}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105291.88/warc/CC-MAIN-20170819012514-20170819032514-00667.warc.gz"}
https://meta.mathoverflow.net/questions/3201/currently-is-it-possible-to-have-relevant-question-under-the-latex-tag
# Currently, is it possible to have a relevant question under the latex tag? A question using the latex tag was recently posted, which received a number of downvotes and comments directing the poster to tex.stackexchange.com. MO predates tex.stackexchange.com, and so at that earlier point the MO community set up a latex tag in order to have some place to get LaTeX help. My question: is there any way to post a relevant question under the latex tag now? I don't see how this is possible given the tag info: https://mathoverflow.net/tags/latex/info Please note that there is a Q&A-site dedicated to this subject http://tex.stackexchange.com [.] Most questions involving LaTeX are a better fit there, and if asked here, might still be migrated to the other site. Most of the existing questions with this tag predate the existence of the other site, they are not a good indicator for which questions now would remain on this site. • That particular question was quite terrible irrespective of the issue if TeX questions are on-topic on MO. I don't think it would fare much better at tex.stackexchange.com . – Emil Jeřábek Apr 10 '17 at 20:41 • What @Emil said. It shouldn't have been migrated under the rule "don't migrate crap". I expected it to bounce back, and get removed by the Roomba services. (In fact, it is already deleted from TeX - LaTeX.) – Asaf Karagila Apr 11 '17 at 6:01
2019-04-23 19:05:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061330914497375, "perplexity": 1199.1237339350464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578610036.72/warc/CC-MAIN-20190423174820-20190423200820-00094.warc.gz"}
https://plainmath.net/24289/prove-that-1-plus-frac-1-tan-2a-1-plus-frac-1-cot-2a-equal-frac-1-sin-2a
# Prove that: $\left(1+\frac{1}{{\mathrm{tan}}^{2}A}\right)\left(1+\frac{1}{{\mathrm{cot}}^{2}A}\right)=\frac{1}{{\mathrm{sin}}^{2}A-{\mathrm{sin}}^{4}A}$

Clara Reese
We use basic trigonometric identities to prove the equation.
$L.H.S.=\left(1+\frac{1}{{\mathrm{tan}}^{2}A}\right)\left(1+\frac{1}{{\mathrm{cot}}^{2}A}\right)$
$=\left(\frac{{\mathrm{tan}}^{2}A+1}{{\mathrm{tan}}^{2}A}\right)\left(\frac{{\mathrm{cot}}^{2}A+1}{{\mathrm{cot}}^{2}A}\right)$
$=\left(\frac{{\mathrm{sec}}^{2}A}{{\mathrm{tan}}^{2}A}\right)\left(\frac{{\mathrm{csc}}^{2}A}{{\mathrm{cot}}^{2}A}\right)$
$=\left(\frac{\frac{1}{{\mathrm{cos}}^{2}A}}{\frac{{\mathrm{sin}}^{2}A}{{\mathrm{cos}}^{2}A}}\right)\left(\frac{\frac{1}{{\mathrm{sin}}^{2}A}}{\frac{{\mathrm{cos}}^{2}A}{{\mathrm{sin}}^{2}A}}\right)$
$=\left(\frac{1}{{\mathrm{sin}}^{2}A}\right)\left(\frac{1}{{\mathrm{cos}}^{2}A}\right)$
$=\left(\frac{1}{{\mathrm{sin}}^{2}A}\right)\left(\frac{1}{1-{\mathrm{sin}}^{2}A}\right)$
$=\frac{1}{{\mathrm{sin}}^{2}A-{\mathrm{sin}}^{4}A}$
$=R.H.S.$
Hence proved.
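The identity can also be spot-checked numerically before proving it. The sketch below is mine; it samples a few angles where tan A is defined and nonzero, using 1/cot²A = tan²A:

```python
import math

def lhs(A):
    tan2 = math.tan(A) ** 2
    return (1 + 1 / tan2) * (1 + tan2)   # since 1/cot^2 A = tan^2 A

def rhs(A):
    s2 = math.sin(A) ** 2
    return 1 / (s2 - s2 ** 2)            # sin^2 A - sin^4 A in the denominator

for A in (0.3, 0.7, 1.2, 2.0):
    assert abs(lhs(A) - rhs(A)) < 1e-9 * abs(rhs(A))
print("identity holds at the sampled angles")
```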
2022-05-20 07:21:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820995569229126, "perplexity": 3175.1791883899155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00070.warc.gz"}
http://math.stackexchange.com/questions/222501/a-tangent-a-few-doubts
# A tangent, a few doubts I have a question: a tangent, I am told, is a line through a specified point (a, f(a)) which touches no other point of the curve in at least one neighbourhood of a. How do I prove from here that a tangent is also the line of best approximation of the function in that neighbourhood? Thanking all in anticipation :) Edit: Another question that comes to my mind: why can't there be more than one line which qualifies as the "best approximation" of the function? I got this question while reading Arturo's pleasant answer. - It is not technically true that a tangent is a line which touches a curve at a single point. This is perhaps the intuitive starting point for the modern definition of a tangent, but there are some issues. If you take, for example, the absolute value function, then you have many lines "touching" the function at the origin, but none of these are true "tangents". On the other hand, if you take a linear function, then the tangent at any point is just the function itself. In this case, the tangent and the function intersect at infinitely many points. The answer to your question, depending on how you look at it, may be stated as a tautology. A tangent in the modern sense of the definition is a line of best approximation simply because it is defined that way. Given a differentiable function $f$, we define the tangent line of the curve at $x_0$ to be the line passing through the point $\left(x_0,\ f(x_0)\right)$ with slope $f'(x_0)$. So let me rephrase the question slightly and ask: why does the derivative provide the "slope of best approximation"? The answer to that question falls under what it means for a function to be differentiable. The definition of the derivative is given as a linear approximation. The derivative of a differentiable function at a point $x_0$ is a number $f'(x_0)$ such that $$f(x_0 + h) = f(x_0) + f'(x_0)h + \epsilon(h)$$ where $\epsilon(h)$ is a remainder function.
The remainder function represents the error in the approximation at a distance $h$ from the site of approximation, and it needs to satisfy $$\lim_{h\rightarrow 0}\frac{\epsilon(h)}{h} = 0$$ so that the error is much smaller than the distance to the point of approximation. Intuitively, this condition is what characterizes the derivative as a good approximation. In this sense, the tangent is the best linear approximation because it is the only line which satisfies the above property. The uniqueness follows from the uniqueness of a limit; the above equation implies $$\lim_{h\rightarrow 0}\frac{f(x_0+h)-f(x_0)}{h} = f'(x_0) + \lim_{h\rightarrow 0}\frac{\epsilon(h)}{h} = f'(x_0)$$ so that the derivative is uniquely defined as the above limit. It is not the best approximation in general though. You can keep adding on successive, higher-order terms by picking apart the error function to get finer and finer approximations. A second-order (quadratic) approximation would look like $$f(x_0 + h) = f(x_0) + f'(x_0)h + \frac{1}{2}f''(x_0)h^2 + \epsilon_2(h)$$ where $\epsilon_2$ is an even smaller remainder. You can successively define better and better approximations, which leads into the concept of Taylor polynomials and Taylor series. - +1, excellent point about tangent definition. –  Emmad Kareem Oct 28 '12 at 6:47 You need dollar signs around the TeX code. And it must satisfy this because that's more or less the definition of the derivative. – EuYu Oct 28 '12 at 8:36 i'm still thinking about the limit.. – The cat with 9 wives Oct 28 '12 at 8:37 Well, think about it this way.
For the traditional derivative to hold as a limit, when we rearrange the linear approximation, we end up with the right-hand side $$f'(x_0) + \frac{\epsilon(h)}{h}$$ To require that the limit exist and be equal to $f'(x_0)$, we necessarily need $$\lim_{h\rightarrow 0}\frac{\epsilon(h)}{h} = 0$$ – EuYu Oct 28 '12 at 8:42 Well, by definition, suppose $$f'(x_0) = \lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}.$$ Therefore, at $(x_0, f(x_0))$, there is a line that is tangent to $f$. At $x_0$, we can explicitly write down the equation of the line through this point; you can find this line given that the slope is $f'(x_0)$. So this line is very nice for approximating the function $f$ (which may be a really complicated one). A line is something much nicer to work with! - The question doesn't ask why a tangent is nice to work with, it asks why a tangent is a nice approximation. –  EuYu Oct 28 '12 at 6:16 what should I do now? –  Chasky Oct 28 '12 at 6:18
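A quick numerical illustration of the uniqueness argument above (my own sketch, with f(x) = x² and x₀ = 1): only the tangent slope f'(x₀) = 2 makes ε(h)/h shrink with h, while any other slope leaves a constant error ratio.

```python
def remainder_ratio(f, x0, slope, h):
    """epsilon(h)/h for the linear approximation f(x0) + slope*h."""
    return (f(x0 + h) - f(x0) - slope * h) / h

f = lambda x: x * x
for h in (1e-1, 1e-3, 1e-5):
    print(remainder_ratio(f, 1.0, 2.0, h))  # equals h: shrinks to 0 (tangent)
    print(remainder_ratio(f, 1.0, 1.5, h))  # equals 0.5 + h: does not shrink
```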
2014-07-26 03:45:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8919780850410461, "perplexity": 178.00495302555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894976.0/warc/CC-MAIN-20140722025814-00231-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/work-done-problem.779929/
# Work done problem

## Homework Statement
There is a block of weight mg sitting on a horizontal table with a coefficient of kinetic friction u. At what angle to the horizontal should one direct a driving force to minimise the work done in moving the block a horizontal distance of 10 m with nonzero velocity, and what is the magnitude of that work?

## Relevant equations
W = ∫ F · ds
F = uR (kinetic friction, where R is the normal reaction)

## The Attempt at a Solution
Looking at this problem, I cannot see why the answer would not be 90 degrees with zero work done, because the direction of motion would be perpendicular to the force, so F · ds is 0. But this is a 7-mark question; surely that explanation isn't worth 7 marks. Have I missed something? Thank you.

Doc Al (Mentor): That would certainly minimize the work! But would that allow you to move the block as required?

So would you say 89.999999999.......?

Doc Al (Mentor): On second thought, I think you are right. (For all practical purposes: yes, you'd need some slight horizontal component.) I suspect the person creating the problem didn't realize this. (I suspect this is not a textbook problem.)

It is a college assessed problem, bit annoying really.
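To see the limit quantitatively: assuming the block is dragged at constant velocity, the driving force must satisfy F cos θ = u(mg - F sin θ), so the work over distance d is W(θ) = F cos θ · d = u·m·g·d / (1 + u·tan θ), which decreases monotonically toward zero as θ approaches 90°. A small sketch of this (parameter values are arbitrary choices of mine):

```python
import math

def work(theta_deg, mu=0.4, m=1.0, g=9.81, d=10.0):
    """Work done dragging the block a distance d at constant speed,
    with the driving force at theta degrees above the horizontal."""
    th = math.radians(theta_deg)
    # constant velocity: F*cos(th) = mu * (m*g - F*sin(th))
    F = mu * m * g / (math.cos(th) + mu * math.sin(th))
    return F * math.cos(th) * d

for theta in (0, 30, 60, 85, 89.9):
    print(theta, round(work(theta), 3))  # work keeps decreasing toward 0
```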
2021-06-16 08:14:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631978631019592, "perplexity": 457.1775430655601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622234.42/warc/CC-MAIN-20210616063154-20210616093154-00538.warc.gz"}
https://cs.stackexchange.com/questions/54104/dividing-bins-into-segments
# Dividing bins into segments This may be a question with a well-known answer, but I've been thinking on it for two days, and can't quite come up with a satisfactory answer. Consider the problem of dividing $p n$ bins numbered $1$ through $pn$ into $p m + 1$ segments by placing $pm$ balls. If we let $k = (pn - pm) \bmod (pm+1)$, then we can show that we may attain a placement of the $pm$ balls such that there are exactly $(pm+1) - k$ segments of empty bins of length $\lfloor \frac{pn-pm}{pm+1} \rfloor$ and $k$ segments of empty bins of length $\lceil \frac{pn-pm}{pm + 1} \rceil$. Here's the tricky question: can we accomplish this task while ensuring that there are exactly $m$ balls in each interval $[(j-1)n + 1, jn]$, for $j \in \{1, 2, \dots, p\}$? Every concrete example I work through answers in the affirmative, but I can't seem to get an algorithmic way of doing it, or a mathematical proof that one such arrangement exists. For particular examples, it always seems to work. See an example below with $p = 2,$ $n = 7,$ $m = 2.$ 1| |X| | |X|1|X| | |X| | | where 1 denotes the beginning of a new 'period', and | denotes the 'wall' of a bin, and X denotes a ball. Note that 1 and | both denote 'walls' of bins. • Your example is actually not regular. You want ||X|||X|||X|||X||. Mar 25 '16 at 0:34 • Not sure I agree with that; my example has $pn = 14$ bins, with $7$ in each period. Yours appears to only have $12$ bins? Mar 31 '16 at 14:08 • I count 14. Are you counting the extreme ones? Mar 31 '16 at 18:35 • Ah, no I was not. I only counted bins as spaces with lines on either side. So, I missed the two most extreme bins in your example. Thanks! Apr 5 '16 at 18:09 For $0 \leq i \leq pm+1$, define $$x_i = \left\lfloor i \frac{pn+1}{pm+1} \right\rfloor.$$ We put ball $i$ in bin $x_i$ for $1 \leq i \leq pm$.
The length of the $i$th space (for $1 \leq i \leq pm+1$) is $$x_i - x_{i-1} = \left\lfloor i \frac{pn+1}{pm+1} \right\rfloor - \left\lfloor (i-1) \frac{pn+1}{pm+1} \right\rfloor \in \left\{ \left\lfloor \frac{pn+1}{pm+1} \right\rfloor, \left\lceil \frac{pn+1}{pm+1} \right\rceil \right\},$$ so your first condition is satisfied (the spaces are as equal to each other as possible). For your second condition, we need to verify that $1 \leq x_1,\ldots,x_m \leq n$, $n+1 \leq x_{m+1},\ldots,x_{2m} \leq 2n$, and so on. It's clearly enough to verify that $x_{Cm} \leq Cn$ for $1 \leq C \leq p$ and that $x_{Cm+1} > Cn$ for $0 \leq C < p$. For the first condition, we have $$x_{Cm} = \left\lfloor Cm \frac{pn+1}{pm+1} \right\rfloor < Cm \frac{pn+1}{pm+1} + 1 = \frac{Cpmn + Cm + pm + 1}{pm+1} \\ < \frac{Cpmn + Cn + pm + 1}{pm+1} = Cn+1,$$ using your implicit assumption $m < n$; since $x_{Cm}$ is an integer, this gives $x_{Cm} \leq Cn$. For the second condition, note that $$(Cm+1) \frac{pn+1}{pm+1} = \frac{Cpmn + Cm + pn + 1}{pm+1} \\ \geq \frac{Cpmn + Cn + pm + 1}{pm+1} = Cn+1,$$ using $Cm + pn \geq Cn + pm$, which follows from $(p-C)(n-m) \geq 0$. Since $Cn+1$ is an integer, taking floors gives $$x_{Cm+1} = \left\lfloor (Cm+1) \frac{pn+1}{pm+1} \right\rfloor \geq Cn+1 > Cn.$$
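The construction in this answer is easy to verify computationally for small parameters (a Python sketch of mine; the helper names are not from the answer):

```python
def place_balls(p, n, m):
    """Ball positions x_i = floor(i*(p*n+1)/(p*m+1)) for i = 1..p*m."""
    return [i * (p * n + 1) // (p * m + 1) for i in range(1, p * m + 1)]

def check(p, n, m):
    xs = place_balls(p, n, m)
    # empty segments between consecutive balls (including both ends)
    gaps = [b - a - 1 for a, b in zip([0] + xs, xs + [p * n + 1])]
    assert max(gaps) - min(gaps) <= 1          # segments as equal as possible
    for j in range(1, p + 1):                  # exactly m balls per block of n bins
        assert sum(1 for x in xs if (j - 1) * n < x <= j * n) == m
    return True

print(check(2, 7, 2))  # -> True (the example from the question)
```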
2021-09-24 12:55:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9241784811019897, "perplexity": 293.6553630564565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00313.warc.gz"}
http://math.uni.lu/eml/projects/primes-in-p.html
## Primes is in P

Goal: For many purposes in cryptography it is essential to have "big" prime numbers; for instance, the very widely used cryptosystem RSA (see the lecture on Number Theory and Cryptography) uses prime numbers having 2048 binary digits (i.e. of the size 2^2048, having 617 decimal digits). The question thus arises of how to find such big prime numbers. The answer is that one takes random numbers and tests whether they are prime. One hence needs a fast test of whether a given integer n is prime; such a test is called a primality test. It is important to remark that a primality test only tests whether an integer n is prime; it does not yield its decomposition into prime factors. This latter question is computationally very hard to solve (the security of RSA depends on this hardness).

It has been known for some time that probabilistic primality testing is possible in time polynomial in the size of n (this means that the run time is bounded from above by a polynomial in log(n), i.e. it can be bounded by log(n)^m for some m in N); for instance, the Miller-Rabin test (see Number Theory and Cryptography) achieves this. By probabilistic primality testing one means that the result is true with a very high probability, but is not proved to be true. It was a great surprise when in 2002 Agrawal, Kayal and Saxena found a primality test (called the AKS test) that runs in time polynomial in log(n) (like Miller-Rabin) and yields proved results.

Literature:

• Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (2004). PRIMES is in P. Annals of Mathematics 160 (2): 781–793. doi:10.4007/annals.2004.160.781

Participants: Luca Notarnicola. Supervisors: Gabor Wiese. Difficulty level: Bachelor Thesis. Results: Bachelor Thesis
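As an illustration of the probabilistic testing described above, here is a minimal sketch of the Miller-Rabin test (not the AKS algorithm itself, which is considerably more involved): if it returns False, n is certainly composite; if it returns True, n is prime except with probability at most 4^(-rounds).

```python
# Minimal sketch of the Miller-Rabin probabilistic primality test.
import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # a^d mod n in O(log n) multiplications
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True

print(miller_rabin(2**127 - 1))     # True: a known Mersenne prime
```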
http://physics.stackexchange.com/questions/21104/kinetic-energy-in-collisions
# kinetic energy in collisions

We were hoping you could help us understand collision energy. Vehicle A is driving west at 35 mph and weighs 1437 kg. Vehicle B is driving north at 35 mph and weighs 1882 kg. Vehicle B crashes into the side (frontal) of vehicle A. What is the amount of energy absorbed by vehicle A? What is the difference in kinetic energy if vehicle A is stopped? Any help you could give us in understanding the physics of collisions would be wonderful. What effect does it pose upon the engine compartment and components?

-

A simple formula for the energy absorbed by the bodies/lost as heat in a collision is $\frac{1}{2}\mu v_{rel}^2(1-e^2)$, where $e$ is the coefficient of restitution and $\mu$ is the reduced mass ($\frac{m_1 m_2}{m_1+m_2}$). But I think you're looking for something more complicated here. –  Manishearth Feb 17 '12 at 2:23

Remember energy is the area under the force–displacement curve. –  ja72 Feb 17 '12 at 6:39

The absorbed energy depends on a lot more variables than just velocity and mass. Some will be used up in friction, some in deforming the vehicles; the rest will go into the kinetic energy after the collision as the vehicles bounce off in different directions. It is easy, though, to calculate an upper limit: $E_{max} = E_{kinetic,A} + E_{kinetic,B}$. –  Alexander Feb 17 '12 at 19:44
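To put numbers on the comments above, here is a small sketch that computes the total kinetic energy (the upper limit $E_{max}$) and the energy dissipated under the simplest extra assumption, a perfectly plastic collision (coefficient of restitution $e = 0$, vehicles locking together). How the dissipated energy splits between the two vehicles depends on their structures, which this sketch does not model:

```python
# Worked numbers for the collision described above, assuming a perfectly
# plastic impact (e = 0): the two vehicles lock together, momentum is
# conserved, and the "lost" kinetic energy goes into deformation and heat.
import math

mph = 0.44704                      # m/s per mph
m_a, v_a = 1437.0, 35 * mph        # vehicle A, westbound
m_b, v_b = 1882.0, 35 * mph        # vehicle B, northbound

ke_before = 0.5 * m_a * v_a**2 + 0.5 * m_b * v_b**2

# momentum components are perpendicular, so add them vectorially
p = math.hypot(m_a * v_a, m_b * v_b)
v_final = p / (m_a + m_b)
ke_after = 0.5 * (m_a + m_b) * v_final**2

print(f"KE before:  {ke_before / 1000:.0f} kJ")   # ~406 kJ
print(f"KE after:   {ke_after / 1000:.0f} kJ")    # ~207 kJ
print(f"dissipated: {(ke_before - ke_after) / 1000:.0f} kJ")
```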
http://www.faqs.org/qa/qa-10969.html
# i've recently switched from a windows based environment to...

<< Back to: TeX, LaTeX, etc.: Frequently Asked Questions with Answers [Monthly]

Question by awp, submitted on 1/12/2004:

I've recently switched from a Windows-based environment to a Linux environment, and my LaTeX document (which was previously OK) now gives an error message when I compile:

----------------------------------------
! LaTeX Error: Missing \begin{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...

l.1 ;
      ; This buffer is for notes you don't want to save, and for Lisp evaluat...
?
----------------------------------------

but there is a \begin{document}, and now the dvi file has an extra page at the beginning with page number

Any advice?? thanks!
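For comparison, a minimal well-formed LaTeX file looks like this. (The transcript above suggests the compiler was fed the contents of an Emacs *scratch* buffer — "This buffer is for notes..." is its default text — rather than the intended .tex file, so LaTeX saw stray text before any preamble.)

```latex
% Minimal LaTeX document: nothing but comments and preamble commands may
% appear before \begin{document}.  If the compiler sees stray text first
% (e.g. the contents of an editor scratch buffer), it reports
% "Missing \begin{document}".
\documentclass{article}
\begin{document}
Hello, world.
\end{document}
```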
http://clay6.com/qa/23410/what-is-effective-capacitance-between-a-and-b
# What is effective capacitance between A and B

$(A)\;C \quad (B)\;\frac{5}{3}C \quad (C)\;\frac{3}{5}C \quad (D)\;\text{None}$

$C_{23} = C_2 + C_3 = C + C = 2C$ (parallel combination)

$C_{123} = \large\frac{C_1 \times C_{23}}{C_1 + C_{23}} = \frac{C \times 2C}{3C} = \frac{2C}{3}$ (series combination)

$C_{AB} = C_{123} + C_4 = \large\frac{2C}{3} + C = \frac{5C}{3}$

Hence B is the correct answer.
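The reduction can also be checked numerically. The topology assumed below (C2 parallel to C3, that combination in series with C1, and the result in parallel with C4) is read off from the worked steps above, since the circuit diagram itself is not reproduced in the text:

```python
# Numerical check of the reduction above, with C = 1 (the answer scales
# linearly in C, so the units are irrelevant).
from fractions import Fraction

def parallel(*cs):
    # parallel capacitances add directly
    return sum(cs)

def series(*cs):
    # series capacitances combine as 1/C_eq = sum(1/C_i)
    return 1 / sum(1 / c for c in cs)

C = Fraction(1)
C23  = parallel(C, C)          # C2 and C3 in parallel  -> 2C
C123 = series(C, C23)          # ... in series with C1  -> 2C/3
C_AB = parallel(C123, C)       # ... in parallel with C4 -> 5C/3
print(C_AB)                    # 5/3
```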
https://bookdown.org/sestelo/sa_financial/how-to-evaluate-the-ph-assumption.html
## 3.6 How to evaluate the PH assumption?

Now we are going to illustrate two methods to evaluate the proportional hazards assumption: one graphical approach and one goodness-of-fit test. Recall that the hazard ratio that compares two specifications of the covariates (defined as $$\textbf{X}^*$$ and $$\textbf{X}$$) can be expressed as $HR = \exp(\sum_{j=1}^p \beta_j (X_j^* - X_j))$ where $$\textbf{X}^*=(X_1^*, X_2^*, \ldots, X_p^*)$$ and $$\textbf{X}=(X_1, X_2, \ldots, X_p)$$, and the proportional hazards assumption states that this quantity is constant over time. Equivalently, this means that the hazard for one individual is proportional to the hazard for any other individual, where the proportionality constant is independent of time. It is important to note that if the graphs of the hazards cross for two or more categories of a predictor of interest, the PH assumption is not met. However, even if the hazard functions do not cross, it is possible that the PH assumption is not met. Thus, rather than checking for crossing hazards, we need to use other approaches.

### 3.6.1 Graphical approach

The most popular graphical technique for evaluating the PH assumption involves comparing estimated –ln(–ln) survival curves over different (combinations of) categories of the variables being investigated. 
A log–log survival curve is simply a transformation of an estimated survival curve that results from taking the natural log of an estimated survival probability twice.[1] As we said, the survival function can be rewritten as $S(t|\textbf X) = \bigg[ S_0(t) \bigg]^{e^{\sum_{j=1}^p \beta_j X_j}}$ and once we apply the –ln(–ln) transformation, the expression becomes $-\ln \bigg[-\ln S(t|\textbf X) \bigg] = - \sum_{j=1}^p \beta_j X_j - \ln \bigg[-\ln S_0(t) \bigg].$ Now, considering two different specifications of the covariates, corresponding to two different individuals, $$\textbf X_1$$ and $$\textbf X_2$$, and subtracting the second log–log curve from the first yields the expression $-\ln \bigg[-\ln S(t|\textbf X_1) \bigg] = -\ln \bigg[-\ln S(t|\textbf X_2) \bigg] - \sum_{j=1}^p \beta_j (X_{1j} - X_{2j}).$ This expression indicates that if the Cox model holds and we plot the estimated log–log survival curves for individuals on the same graph, the two plots would be approximately parallel. The distance between the two curves is the linear expression involving the differences in predictor values, which does not involve time. Note that there is an important problem associated with this approach, namely, how to decide "how parallel is parallel?". This decision can be subjective; thus the proposal is to be conservative by assuming the PH assumption is satisfied unless there is strong evidence of nonparallelism of the log–log curves.

Now we are going to check the proportional hazards assumption for the variable IsBorrowerHomeowner. This can be done by plotting log–log Kaplan–Meier survival estimates against time (or against the log of time) and evaluating whether the curves are reasonably parallel. 
```r
km_home <- survfit(Surv(time, status) ~ IsBorrowerHomeowner, data = loan_filtered)
# autoplot(km_home)  # just to see the KM curves
plot(km_home, fun = "cloglog", xlab = "Time (in days) using log",
     ylab = "log-log survival", main = "log-log curves by homeowner status")
# another option
ggsurvplot(km_home, fun = "cloglog")
```

It seems that the proportional hazards assumption is violated, as the log–log survival curves are not parallel.

Another graphical option is to use the Schoenfeld residuals to examine model fit and detect outlying covariate values. Schoenfeld residuals represent the difference between the observed covariate and its expectation given the risk set at that time. Under the PH assumption they should be flat, centered about zero. You can see the explanation in Schoenfeld (1982). The main idea is that Schoenfeld defined a partial residual as the difference between the observed value of $$X_i$$ and its conditional expectation given the risk set $$R_i$$, and demonstrated that these residuals have to be independent of time. So, if you plot them ranked by event time, the plot must not show any pattern.

```r
ggcoxdiagnostics(m2, type = "schoenfeld")
# another option
zph <- cox.zph(m2)
par(mfrow = c(1, 2))
plot(zph, var = 1)
plot(zph, var = 2)
```

### 3.6.2 Goodness-of-fit test

A second approach for assessing the PH assumption involves goodness-of-fit (GOF) tests. To this end, different tests have been proposed in the literature (Grambsch and Therneau 1994). We focus on the test of Harrell (1986), a variation of a test originally proposed by Schoenfeld (1982). This is a test of correlation between the Schoenfeld residuals and survival time. A correlation of zero indicates that the model meets the proportional hazards assumption (the null hypothesis). It can be applied by means of the cox.zph function of the survival package. 
```r
cox.zph(m2)
##                           rho chisq        p
## LoanOriginalAmount2     0.130  27.1 1.96e-07
## IsBorrowerHomeownerTrue 0.103  14.0 1.81e-04
## GLOBAL                     NA  49.3 1.96e-11
```

It seems again that the proportional hazards assumption is not satisfied (as we saw with the log–log survival curves).

### References

Grambsch, Patricia M., and Terry M. Therneau. 1994. "Proportional Hazards Tests and Diagnostics Based on Weighted Residuals." Biometrika 81 (3): 515–26. doi:10.1093/biomet/81.3.515.

Harrell, F. 1986. "The PHGLM Procedure." In SAS Supplemental Library User's Guide, Version 5. Cary, NC: SAS Institute Inc.

Schoenfeld, David. 1982. "Partial Residuals for the Proportional Hazards Regression Model." Biometrika 69 (1): 239–41. doi:10.1093/biomet/69.1.239.

1. Note that the scale of the y-axis of an estimated survival curve ranges between 0 and 1, whereas the corresponding scale for a –ln(–ln) curve ranges between $$-\infty$$ and $$+\infty$$.
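Returning to the graphical check of Section 3.6.1, the parallelism property itself can be illustrated with a tiny numerical example (in Python for self-containedness, separate from the R analysis): under a PH model with an exponential baseline, the vertical gap between two log–log curves is constant in time and equal to minus the coefficient times the covariate difference.

```python
# Illustration of log-log parallelism under PH: with baseline
# S0(t) = exp(-lam*t) and S(t|x) = S0(t)**exp(beta*x), the curves
# -log(-log S(t|x)) for x = 0 and x = 1 differ by the constant -beta.
import math

lam, beta = 0.3, 0.8

def loglog_surv(t, x):
    s = math.exp(-lam * t) ** math.exp(beta * x)
    return -math.log(-math.log(s))

for t in (0.5, 1.0, 2.0, 5.0):
    gap = loglog_surv(t, 1) - loglog_surv(t, 0)
    print(round(gap, 6))          # always -0.8, independent of t
```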
https://tqft.math.tecnico.ulisboa.pt/seminars?year=2006
# 2006 seminars

## 02/06/2006, Friday, 14:00–15:00

João Faria Martins, Instituto Superior Técnico

Crossed Modules and Crossed Complexes in Geometric Topology I

This short course aims at describing some applications of crossed modules and crossed complexes to Geometric Topology, and it is based on results by the author. The background is R. Brown and P.J. Higgins' beautiful work on crossed modules and crossed complexes. We will give a lot of attention to applications to knotted embedded surfaces in S^4, and we will make explicit use of movie representations of them. Some of the ideas of this work started from Yetter's invariant of manifolds and subsequent developments. Full summary and references: http://www.math.ist.utl.pt/~rpicken/tqft/kauffman062006/CMGT.pdf

## 02/06/2006, Friday, 15:30–16:30

Louis Kauffman, Univ. Illinois, Chicago

Virtual Knot Theory I

Introduction to combinatorial knot theory; Reidemeister moves, moves on virtuals; interpretation of virtual knot theory in terms of knots and links in thickened surfaces; bracket polynomial for virtuals, involutory quandle for virtuals.

## 05/06/2006, Monday, 15:30–16:30

Marko Stosic, Instituto Superior Técnico

Khovanov homology of torus knots

In this talk we show that the torus knots ${T}_{p,q}$ for $3\le p\le q$ are homologically thick. Furthermore, we show that we can reduce the number of twists $q$ without changing a certain part of the homology, and consequently we show that there exists a stable homology for torus knots, conjectured in [1]. Also, we calculate the Khovanov homology groups of low homological degree for torus knots, and we conjecture that the homological width of the torus knot ${T}_{p,q}$ is at least $p$.

References:

[1] N. Dunfield, S. Gukov and J. Rasmussen: The superpolynomial for link homologies, arXiv:math.GT/0505056.

[2] M. Stosic: Homological thickness of torus knots, arXiv:math.GT/0511532.

## 06/06/2006, Tuesday, 15:00–16:00

Louis Kauffman, Univ.
Illinois, Chicago

Virtual Knot Theory II

Continuing discussion of invariants of virtual knots and links. Biquandles and the 0-level Alexander polynomial. Quaternionic biquandle. Weyl algebra and the linear non-commutative biquandles.

## 06/06/2006, Tuesday, 17:00–18:00

João Faria Martins, Instituto Superior Técnico

Crossed Modules and Crossed Complexes in Geometric Topology II

This short course aims at describing some applications of crossed modules and crossed complexes to Geometric Topology, and it is based on results by the author. The background is R. Brown and P.J. Higgins' beautiful work on crossed modules and crossed complexes. We will give a lot of attention to applications to knotted embedded surfaces in S^4, and we will make explicit use of movie representations of them. Some of the ideas of this work started from Yetter's invariant of manifolds and subsequent developments. Full summary and references: http://www.math.ist.utl.pt/~rpicken/tqft/kauffman062006/CMGT.pdf

## 07/06/2006, Wednesday, 14:30–15:30

Louis Kauffman, Univ. Illinois, Chicago

Virtual Knot Theory III

Flat virtuals and long flat virtuals. Khovanov homology and virtual knot theory.

## 09/10/2006, Monday, 11:00–12:00

Mark Gotay, Univ. of Hawai'i at Manoa

Stress-Energy-Momentum Tensors

J. Marsden and I present a new method of constructing a stress-energy-momentum tensor for a classical field theory, based on covariance considerations and Noether theory. Our stress-energy-momentum tensor ${T}^{\mu }{}_{\nu }$ is defined using the (multi)momentum map associated to the spacetime diffeomorphism group. The tensor ${T}^{\mu }{}_{\nu }$ is uniquely determined as well as gauge-covariant, and depends only upon the divergence equivalence class of the Lagrangian. It satisfies a generalized version of the classical Belinfante-Rosenfeld formula, and hence naturally incorporates both the canonical stress-energy-momentum tensor and the "correction terms" that are necessary to make the latter well behaved.
Furthermore, in the presence of a metric on spacetime, our ${T}^{\mu \nu }$ coincides with the Hilbert tensor and hence is automatically symmetric.

References:

1. Gotay, M. J. and J. E. Marsden [1992], Stress-energy-momentum tensors and the Belinfante-Rosenfeld formula, Contemp. Math. 132, 367–391.

2. Forger, M. and H. Römer [2004], Currents and the energy-momentum tensor in classical field theory: A fresh look at an old problem, Ann. Phys. 309, 306–389.

## 12/10/2006, Thursday, 16:00–17:00

Mark Gotay, Univ. of Hawai'i at Manoa

Obstructions to Quantization 1

Quantization is not a straightforward proposition, as demonstrated by Groenewold's and Van Hove's discovery, sixty years ago, of an "obstruction" to quantization. Their "no-go theorems" assert that it is in principle impossible to consistently quantize every classical polynomial observable on the phase space ${R}^{2n}$ in a physically meaningful way. Similar obstructions have recently been found for ${S}^{2}$ and ${T}^{*}{S}^{1}$, buttressing the common belief that no-go theorems should hold in some generality. Surprisingly, this is not so: it has just been proven that there are no obstructions to quantizing either ${T}^{2}$ or ${T}^{*}{R}_{+}$. In this talk we conjecture (and in some cases prove) generalized Groenewold-Van Hove theorems, and determine the maximal Lie subalgebras of observables which can be consistently quantized. This requires a study of the structure of Poisson algebras of symplectic manifolds and their representations. To these ends we review known results as well as recent theoretical work. Our discussion is independent of any particular method of quantization; we concentrate on the structural aspects of quantization theory which are common to all Hilbert space-based quantization techniques. (This is joint work with J. Grabowski, H. Grundling and A. Hurst.)

References:

1. Gotay, M. J. [2000], Obstructions to Quantization, in: Mechanics: From Theory to Computation.
(Essays in Honor of Juan-Carlos Simo), J. Nonlinear Science Editors, 271–316 (Springer, New York).

2. Gotay, M. J. [2002], On Quantizing Non-nilpotent Coadjoint Orbits of Semisimple Lie Groups. Lett. Math. Phys. 62, 47–50.

## 13/10/2006, Friday, 14:00–15:00

Mark Gotay, Univ. of Hawai'i at Manoa

Obstructions to Quantization 2

Let $\left(L,\nabla \right)$ be a prequantum line bundle over a symplectic manifold $X$, and $S$ its symplectization. Kostant showed that the classical Poisson bracket on $S$ is simply prequantization on $X$. C. Duval and I have taken this a step farther to obtain a quantization of $X$ using a generalized star-product on $S$.

References:

1. Kostant, B. [2003], Minimal coadjoint orbits and symplectic induction, arXiv:math.SG/0312252.
http://www.fact-archive.com/encyclopedia/Natural_units
# Planck units

In physics, Planck units are a system of units of measurement going back to Max Planck that is an early definition of natural units. The system is defined using only the following fundamental physical constants, and is "natural" in the sense that the numerical values of these five universal constants become 1 when expressed in units of this system.

| Constant | Symbol | Dimension |
|---|---|---|
| speed of light in vacuum | $c$ | L T^-1 |
| gravitational constant | $G$ | M^-1 L^3 T^-2 |
| "reduced Planck's constant" or Dirac's constant, $\hbar=\frac{h}{2 \pi}$, where $h$ is Planck's constant | $\hbar$ | M L^2 T^-1 |
| Coulomb force constant, $\frac{1}{4 \pi \epsilon_0}$, where $\epsilon_0$ is the permittivity of vacuum | $\frac{1}{4 \pi \epsilon_0}$ | Q^-2 M L^3 T^-2 |
| Boltzmann constant | $k$ | M L^2 T^-2 K^-1 |

The Planck units are often semi-humorously referred to by physicists as "God's units". They eliminate anthropocentric arbitrariness from the system of units: some physicists believe that an extra-terrestrial intelligence might be expected to use the same system.

Natural units can help physicists reframe questions. Perhaps Frank Wilczek said it best (June 2001 Physics Today, http://www.physicstoday.org/pt/vol-54/iss-6/p12.html):

...We see that the question [posed] is not, "Why is gravity so feeble?" but rather, "Why is the proton's mass so small?" For in Natural (Planck) Units, the strength of gravity simply is what it is, a primary quantity, while the proton's mass is the tiny number [1/(13 quintillion)]...

The strength of gravity is simply what it is, and the strength of the electromagnetic force simply is what it is. The electromagnetic force operates on a different physical quantity (electric charge) than gravity (mass), so it cannot be compared directly to gravity. 
To note that gravity is an extremely weak force is, from the point of view of natural units, like comparing apples to oranges. It is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, and that is because the charge on the protons is approximately one natural unit of charge, while the mass of the protons is far, far less than the natural unit of mass.

Natural units have the advantage of simplifying many equations in physics by removing conversion factors. For this reason, they are popular in quantum gravity research.

Newton's law of universal gravitation $F = G \frac{m_1 m_2}{r^2}$ becomes $F = \frac{m_1 m_2}{r^2}$ using Planck units.

Schrödinger's equation $- \frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}, t) + V(\mathbf{r}) \psi(\mathbf{r}, t) = i \hbar \frac{\partial \psi}{\partial t} (\mathbf{r}, t)$ becomes $- \frac{1}{2m} \nabla^2 \psi(\mathbf{r}, t) + V(\mathbf{r}) \psi(\mathbf{r}, t) = i \frac{\partial \psi}{\partial t} (\mathbf{r}, t)$.

The energy of a particle or photon with radian frequency $\omega$ in its wave function, $E = \hbar \omega$, becomes $E = \omega$.

Einstein's famous mass-energy equation $E = m c^2$ becomes $E = m$ (i.e. a body with a mass of 5000 Planck mass units has an intrinsic energy of 5000 Planck energy units), and the full form $E^2 = (m c^2)^2 + (p c)^2$ becomes $E^2 = m^2 + p^2$.

Einstein's field equations of general relativity $G_{\mu \nu} = 8 \pi {G \over c^4} T_{\mu \nu}$ become $G_{\mu \nu} = 8 \pi T_{\mu \nu}$.

The unit of temperature is defined so that the mean amount of thermal kinetic energy carried per particle per degree of freedom of motion, $E = \frac{1}{2} k T$, becomes $E = \frac{1}{2} T$.

Coulomb's law $F = \frac{1}{4 \pi \epsilon_0} \frac{q_1 q_2}{r^2}$ becomes $F = \frac{q_1 q_2}{r^2}$. 
Maxwell's equations

$\nabla \cdot \mathbf{E} = \frac{1}{\epsilon_0}\rho$

$\nabla \cdot \mathbf{B} = 0$

$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$

$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}} {\partial t}$

become

$\nabla \cdot \mathbf{E} = 4 \pi \rho$

$\nabla \cdot \mathbf{B} = 0$

$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$

$\nabla \times \mathbf{B} = 4 \pi \mathbf{J} + \frac{\partial \mathbf{E}} {\partial t}$

when using Planck units. (The $4 \pi$ factors would have been eliminated if $\epsilon_0$ had been normalized instead of the Coulomb force constant $\frac{1}{4 \pi \epsilon_0}$.)

## Base Planck units

By constraining the numerical values of the above 5 fundamental constants to be 1, five base units for time, length, mass, charge, and temperature are defined.

| Name | Dimension | Expression | Approx. SI equivalent |
|---|---|---|---|
| Planck time | Time (T) | $t_P = \sqrt{\frac{\hbar G}{c^5}}$ | 5.39121 × 10^-44 s |
| Planck length | Length (L) | $l_P = c \, t_P = \sqrt{\frac{\hbar G}{c^3}}$ | 1.61624 × 10^-35 m |
| Planck mass | Mass (M) | $m_P = \sqrt{\frac{\hbar c}{G}}$ | 2.17645 × 10^-8 kg |
| Planck charge | Electric charge (Q) | $q_P = \sqrt{4 \pi \epsilon_0 \hbar c}$ | 1.8755459 × 10^-18 C |
| Planck temperature | Temperature (M L^2 T^-2 / k) | $T_P = \frac{m_P c^2}{k} = \sqrt{\frac{\hbar c^5}{G k^2}}$ | 1.41679 × 10^32 K |

## Derived Planck units

As in other systems of units, the following units of physical quantity are defined in terms of the base Planck units.
| Name | Dimension | Expression | Approx. SI equivalent |
|---|---|---|---|
| Planck energy | Energy (M L^2 T^-2) | $E_P = m_P c^2 = \sqrt{\frac{\hbar c^5}{G}}$ | 1.9561 × 10^9 J |
| Planck force | Force (M L T^-2) | $F_P = \frac{E_P}{l_P} = \frac{c^4}{G}$ | 1.21027 × 10^44 N |
| Planck power | Power (M L^2 T^-3) | $P_P = \frac{E_P}{t_P} = \frac{c^5}{G}$ | 3.62831 × 10^52 W |
| Planck density | Density (M L^-3) | $\rho_P = \frac{m_P}{l_P^3} = \frac{c^5}{\hbar G^2}$ | 5.15500 × 10^96 kg/m^3 |
| Planck angular frequency | Frequency (T^-1) | $\omega_P = \frac{1}{t_P} = \sqrt{\frac{c^5}{\hbar G}}$ | 1.85487 × 10^43 rad/s |
| Planck pressure | Pressure (M L^-1 T^-2) | $p_P = \frac{F_P}{l_P^2} = \frac{c^7}{\hbar G^2}$ | 4.63309 × 10^113 Pa |
| Planck current | Electric current (Q T^-1) | $I_P = \frac{q_P}{t_P} = \sqrt{\frac{4 \pi \epsilon_0 c^6}{G}}$ | 3.4789 × 10^25 A |
| Planck voltage | Voltage (M L^2 T^-2 Q^-1) | $V_P = \frac{E_P}{q_P} = \sqrt{\frac{c^4}{4 \pi \epsilon_0 G}}$ | 1.04295 × 10^27 V |
| Planck impedance | Resistance (M L^2 T^-1 Q^-2) | $Z_P = \frac{V_P}{I_P} = \frac{1}{4 \pi \epsilon_0 c} = \frac{Z_0}{4 \pi}$ | 2.99792458 × 10^1 Ω |

## Discussion

At the "Planck scales" in length, time, density, or temperature, one must consider both the effects of quantum mechanics and general relativity. Unfortunately this requires a theory of quantum gravity, which does not yet exist.

Most of the Planck units are either too small or too large for practical use, unless prefixed with large powers of ten. They also suffer from uncertainties in the measurement of some of the constants on which they are based, especially of the gravitational constant $G$ (which has a relative uncertainty of about 1 part in 7000).

It might be interesting to note that the elementary charge measured in terms of the Planck charge comes out to be $e = \sqrt{\alpha} \, q_P = 0.085424543 \, q_P$, where $\alpha$ is the fine-structure constant $\alpha =\left ( \frac{e}{q_P} \right )^2 = \frac{e^2}{4 \pi \epsilon_0 \hbar c} = \frac{1}{137.03599911}$. 
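The base-unit values in the tables above can be reproduced directly from the defining expressions. The numerical constants below are rounded approximate values (roughly CODATA figures), so the outputs agree with the table only to a few significant figures:

```python
# Recompute the base Planck units from their defining expressions, using
# rounded approximate values of the fundamental constants.
import math

c    = 299792458.0        # speed of light, m/s
G    = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34         # reduced Planck constant, J s
eps0 = 8.8542e-12         # vacuum permittivity, F/m
k    = 1.3807e-23         # Boltzmann constant, J/K

t_P = math.sqrt(hbar * G / c**5)                 # Planck time        ~ 5.39e-44 s
l_P = c * t_P                                    # Planck length      ~ 1.62e-35 m
m_P = math.sqrt(hbar * c / G)                    # Planck mass        ~ 2.18e-8 kg
q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)   # Planck charge      ~ 1.88e-18 C
T_P = m_P * c**2 / k                             # Planck temperature ~ 1.42e32 K

for name, val in [("t_P", t_P), ("l_P", l_P), ("m_P", m_P),
                  ("q_P", q_P), ("T_P", T_P)]:
    print(f"{name} = {val:.4e}")
```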
The dimensionless fine-structure constant can be thought of as taking on the value that it does because of the amount of charge, measured in natural units (Planck charge), that electrons, protons, and other charged particles happen to have been assigned by nature herself. Because the electromagnetic force between two particles is proportional to the product of the charges of each particle (each of which would, in Planck units, be proportional to $\sqrt{\alpha}$), the strength of the electromagnetic force relative to other forces is proportional to $\alpha$.

The Planck impedance comes out to be the characteristic impedance of free space $Z_0$ scaled down by $4 \pi$, meaning that, in terms of Planck units, $Z_0 = 4 \pi Z_P$. This factor comes from the fact that it is the Coulomb force constant $\frac{1}{4 \pi \epsilon_0}$ in Coulomb's law that is normalized to 1, as is done in the cgs system of units, rather than the permittivity of free space $\epsilon_0$. This, and the fact that the gravitational constant $G$ is normalized rather than $4 \pi G$, could be considered an arbitrary definition, and perhaps a non-optimal one from the perspective of defining the most natural physical units as the choice for Planck units.

## Planck units and the invariant scaling of nature

Referring to Duff, Okun, and Veneziano, "Trialogue on the number of fundamental constants", http://xxx.lanl.gov/pdf/physics/0110060 (the operationally indistinguishable world of Mr. Tompkins): if all physical quantities (masses and other properties of particles) were expressed in terms of Planck units, those quantities would be dimensionless numbers (mass divided by the Planck mass, length divided by the Planck length, etc.), and the only quantities that we ultimately measure in physical experiments or in our perception of reality are dimensionless numbers. 
(When one measures a length with a ruler or tape-measure, one is actually counting tick marks on a given standard, that is, measuring the length relative to that standard, which is a dimensionless value. It is no different for physical experiments: all physical quantities are measured relative to some other quantity of the same dimension.)

We can notice a difference if some dimensionless physical quantity such as $\alpha$ or the proton/electron mass ratio changes (atomic structures would change), but if all dimensionless physical quantities remained constant, we could not tell if a dimensionful quantity, such as the speed of light $c$, had changed. And, indeed, the Tompkins concept becomes meaningless in our existence if a dimensionful quantity such as $c$ changes, even drastically.

If the speed of light $c$ were somehow suddenly cut in half and changed to $c/2$ (but with all dimensionless physical quantities continuing to remain constant), then the Planck length would increase by a factor of $\sqrt{8}$ from the point of view of some unaffected "god-like" observer on the outside. But the size of atoms (approximately the Bohr radius) is related to the Planck length by an unchanging dimensionless constant:

$a_0 = \frac{4\pi\epsilon_0\hbar^2}{m_e e^2} = \frac{m_P}{m_e \alpha} l_P$

Then atoms would be bigger (in one dimension) by $\sqrt{8}$, each of us would be taller by $\sqrt{8}$, and so would our meter sticks be taller (and wider and thicker) by a factor of $\sqrt{8}$, and we would not know the difference. Our clocks would tick slower by a factor of $\sqrt{32}$ (from the point of view of this unaffected "god-like" observer) because the Planck time has increased by $\sqrt{32}$, but we would not know the difference.
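The $\sqrt{8}$ and $\sqrt{32}$ factors follow directly from the defining formulas $l_P = \sqrt{\hbar G/c^3}$ and $t_P = \sqrt{\hbar G/c^5}$. A quick numerical check (our own sketch; the constant values are standard SI approximations):

```python
import math

hbar = 1.054571817e-34  # J*s
G    = 6.67430e-11      # m^3 kg^-1 s^-2
c    = 299792458.0      # m/s

def planck_length(c):
    return math.sqrt(hbar * G / c**3)

def planck_time(c):
    return math.sqrt(hbar * G / c**5)

# Hypothetically halve c while holding hbar, G (and all
# dimensionless constants) fixed
scale_l = planck_length(c / 2) / planck_length(c)
scale_t = planck_time(c / 2) / planck_time(c)

print(scale_l)  # sqrt(8)  ~ 2.8284
print(scale_t)  # sqrt(32) ~ 5.6569
```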
This hypothetical god-like observer on the outside might observe that light now travels at half the speed that it used to (as would all other observed velocities), but it would still travel 299792458 of our new meters in the time elapsed by one of our new seconds. We would not notice any difference.

This conceptually contradicts George Gamow in *Mr. Tompkins*, who suggests that if a dimensionful universal constant such as $c$ changed, we would easily notice the difference. We must then ask him: how would we measure the difference if our measuring standards also changed in the same way?

## Max Planck's discovery of the natural units

Max Planck first listed his set of units (and gave values for them remarkably close to those used today) in May 1899 in a paper presented to the Prussian Academy of Sciences: Max Planck, "Über irreversible Strahlungsvorgänge", Sitzungsberichte der Preußischen Akademie der Wissenschaften, vol. 5, p. 479 (1899).

At the time he presented the units, quantum mechanics had not yet been invented. He himself had not yet discovered the theory of black-body radiation (first published December 1900), in which Planck's constant $h$ made its first appearance and for which Planck was later awarded the Nobel prize. The relevant parts of Planck's 1899 paper leave some confusion as to how he managed to come up with the units of time, length, mass, temperature, etc., which today we define using Dirac's constant $\hbar$ and motivate by references to quantum physics, before things like $\hbar$ and quantum physics were known. Here is a quote from the 1899 paper that gives an idea of how Planck thought about the set of units:

> ...ihre Bedeutung für alle Zeiten und für alle, auch ausserirdische und aussermenschliche Kulturen notwendig behalten und welche daher als "natürliche Masseinheiten" bezeichnet werden können...
...These necessarily retain their meaning for all times and for all civilizations, even extraterrestrial and non-human ones, and can therefore be designated as "natural units"...
https://www.biostars.org/p/252325/
How to map reads to tiny reference using bowtie2

Asked 3.9 years ago by whaiyu06:

Hello everyone! I am having trouble mapping reads to an extremely tiny reference using bowtie/bowtie2. I have a large amount of sequencing data (PE150), and I only want to focus on 20 bp of each 150 bp read, but I don't know the exact location of those 20 bp within each read because of mutations and indels. At the same time, I have an expected library containing all possible sequences of the 20 bp. So I want to use the expected 20 bp library to build a reference, then map the reads to the constructed reference. Even after trying many combinations of parameters, I could not find the matching 20 bp sequences, as the reference is too small. If you have any suggestions, please let me know. Thank you a lot.

Tags: alignment • 1.7k views

Reply (whaiyu06): Thanks for your reply. It is not appropriate for bowtie2 to align the longer reads to a short reference. I have used BLAST to search for the aligned sequences, but that does not take the quality of the sequences into account. Furthermore, I don't understand how to carry out the second strategy — do you mean that it also ignores base qualities? As for the third approach, I may not get any result if I add more than 130 Ns on both sides of my 20 bp sequence, as the score would be too low to reach the min-score threshold. Best

Reply: For the 2nd solution, if you use your 150-base reads as the reference, base qualities will not be used during the mapping (only for the 20-base sequences, if you have base qualities for them). Maybe you can try to filter the reads and keep only good-quality 150-base reads as references. About the 3rd solution, it worked for me with Ns using bwa, but with low-stringency parameters (a low score, to capture indels).
By the way, if I remember well, when I did that kind of alignment I chose bwa and not bowtie because of the seed used to start the alignment, which is 20 bases for bowtie by default against 4 for bwa. Best

Answer: At the very least you'll need to use local alignment (the `--local` option). You might also have to change `--score-min`, but see what you get with just local alignment.

Reply: Dear Devon - This did not work for me. I have posted my question on this link https://www.biostars.org/p/474903/#474905 It would be great if you could help me with this. Thank you in advance.
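A minimal command sketch of the local-alignment suggestion above (our own illustration: the file names are placeholders, and the `--score-min G,10,4` function is just one example of loosening bowtie2's default `G,20,8` used in `--local` mode):

```shell
# Guarded so the sketch is a no-op where bowtie2 is not installed.
if command -v bowtie2 >/dev/null 2>&1; then
    # Index the library of expected 20 bp sequences as the "reference"
    bowtie2-build ref20bp.fa ref20bp
    # Local alignment with a shorter seed and a permissive minimum score
    bowtie2 --local -L 10 --score-min G,10,4 \
        -x ref20bp -1 reads_R1.fastq -2 reads_R2.fastq -S hits.sam
else
    echo "bowtie2 not found; the commands above are only a sketch"
fi
```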
https://codeahoy.com/learn/aspnet/ch9/
# MVC - Update Layout

The layout file at `Views/Shared/_Layout.cshtml` contains the "base" HTML for each view. This includes the navbar, which is rendered at the top of each page.

To add a new item to the navbar, find the HTML code for the existing navbar items in `Views/Shared/_Layout.cshtml`:

```html
<ul class="nav navbar-nav">
    <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
    <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
    <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
</ul>
```

Add your own item that points to the Todo controller instead of Home:

```html
<li><a asp-controller="Todo" asp-action="Index">My to-dos</a></li>
```

The `asp-controller` and `asp-action` attributes on the `<a>` element are called tag helpers. Before the view is rendered, ASP.NET Core replaces these tag helpers with real HTML attributes. In this case, a URL to the `/Todo/Index` route is generated and added to the `<a>` element as an `href` attribute. This means you don't have to hard-code the route to the `TodoController`. Instead, ASP.NET Core generates it for you automatically.

If you've used Razor in ASP.NET 4.x, you'll notice some syntax changes. Instead of using `@Html.ActionLink()` to generate a link to an action, tag helpers are now the recommended way to create links in your views. Tag helpers are useful for forms, too (you'll see why in a later chapter). You can learn about other tag helpers in the documentation at https://docs.asp.net.
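For instance, once the tag helper has run, the new navbar item would be rendered as plain HTML along these lines (a sketch; the exact URL depends on your routing configuration):

```html
<li><a href="/Todo/Index">My to-dos</a></li>
```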
https://www.khanacademy.org/math/cc-kindergarten-math/cc-kindergarten-counting/kindergarten-counting/e/one-more--one-less
# Find 1 more or 1 less than a number

### Problem

There were 3 foxes. One fox ran away. How many foxes are left?
https://gimehirezaz.io-holding.com/cohomologie-p-adique-book-13433xt.php
Last edited by Tajora, Monday, July 20, 2020 | History

1 edition of *Cohomologie p-adique* found in the catalog.

Written in English.

Subjects:
- Dwork, Bernard M.
- Homology theory

Edition Notes, The Physical Object, and ID Numbers:
- Series: Astérisque 119-120
- Contributions: Dwork, Bernard M.; Société mathématique de France
- Pagination: [5], 330 p.
- Number of Pages: 330
- Open Library: OL14263954M

1 INRIA Rocquencourt, Domaine de Voluceau, BP, Le Chesnay Cedex, France. 2 LPTHIRM and Département d'Aéronautique, Université de [...]. Cited by:

New updated edition by Yves Laszlo of the book "Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux (SGA 2)", Advanced Studies in Pure Mathematics 2, North-Holland Publishing Company, Amsterdam.

[Ber96] Berthelot, P., Cohomologie rigide et cohomologie rigide à supports propres (Université de Rennes 1, Institut de Recherche Mathématique de Rennes [IRMAR]).

[Bij16] Bijakowski, S., Analytic continuation on Shimura varieties with μ-ordinary locus, Algebra Number Theory 10 (), –. Cited by: 4.

This book resulted from a research conference in arithmetic geometry held at Arizona State University in March. The papers describe important recent advances in arithmetic geometry. Several articles deal with p-adic modular forms of half-integral weight and their roles in arithmetic geometry.

Kaoru Hiraga and Hiroshi Saito, On restriction of admissible representations, Algebra and number theory, Hindustan Book Agency, Delhi, pp. –. MR

[JL70] H. Jacquet and R. P. Langlands, Automorphic forms on ${\rm GL}(2)$, Lecture Notes in Mathematics, Vol.
, Springer-Verlag, Berlin-New York.

Additional Physical Format: Online version: Cohomologie p-adique. [Paris]: Société mathématique de France (OCoLC). Named Person: Bernard M. Dwork.

Abstract (translated from the French). We show in this article that the theorem
for a class of p-adic differential equations on the projective line implies the finiteness theorem for the Monsky–Washnitzer p-adic cohomology of an affine variety. The preceding class of equations is contained in a class of equations for which the index theorem holds. Cited by:

In mathematics, p-adic Hodge theory is a theory that provides a way to classify and study p-adic Galois representations of characteristic 0 local fields with residual characteristic p (such as Q_p). The theory has its beginnings in Jean-Pierre Serre and John Tate's study of Tate modules of abelian varieties and the notion of Hodge–Tate representations.

Borel, A., Cohomologie de certains groupes discrets et laplacien p-adique [d'après H. Garland]. In: Séminaire Bourbaki vol. /74, Exposés –, Lecture Notes. Cited by: 6.

P. Berthelot, Géométrie rigide et cohomologie des variétés algébriques de caractéristique p, Journées d'analyse p-adique, in Introduction aux cohomologies p-adiques, Bull. Soc. Math. France, Mémoire, p. 7–32. MathSciNet, Google Scholar. Cited by:

Recently, the existence of Morse decompositions for nonautonomous dynamical systems was shown for three different time domains: the past, the [...]
Conjectures on the existence of zero-cycles on arbitrary smooth projective varieties over number fields were proposed by Colliot-Thélène, Sansuc, Kato and Saito. Cited by:

Cohomologie cristalline des schémas de caractéristique p > 0, Lecture Notes in Math., Springer-Verlag.

Cohomologie de de Rham et cohomologie étale p-adique (d'après G. Faltings, J.-M. Fontaine et al.), Séminaire Bourbaki, Exp., in: Astérisque, pp. –.

The book of involutions.

History. The Peccot lectures are among several manifestations organized at the Collège de France which are funded and managed by bequests from the family of Claude-Antoine Peccot, a young mathematician who died at an early age. Several successive donations to the foundation were made by Julie Anne Antoinette Peccot and Claudine Henriette Marguerite [...].

The oldest mathematics journal in continuous publication in the Western Hemisphere, the American Journal of Mathematics ranks as one of the most respected and celebrated journals in its field. The Journal has earned its reputation by presenting pioneering mathematical papers.

See Debarre's book ("Higher dimensional algebraic geometry"). Alternatively, take a look at Debarre's Bourbaki talk ("Variétés rationnellement connexes"). The idea is that a rationally connected variety has no holomorphic forms, so by Hodge theory the structure sheaf $\mathcal{O}_X$ is acyclic, implying $\chi(X,\mathcal{O}_X) = 1$.

Wiesława Krystyna Nizioł (pronounced ['viɛswava 'krɨstɨna 'niziɔw]) is a Polish mathematician, director of research at CNRS, based at the École normale supérieure de Lyon. Her research concerns arithmetic geometry, and in particular p-adic Hodge theory and Galois representations.
Its main result is a p-adic analogue of the Gross–Zagier formula which relates the images of generalized Heegner cycles under the p-adic Abel–Jacobi map to the special values of certain p-adic Rankin L-series at Cited by: [33] A. Ducros - “Triangulations et cohomologie étale sur une courbe analytique”, article actuellement soumis. | Zbl [34] - “Cohomologie non ramifiée sur une courbe p-adique lisse”, Compositio Math. (), no. 1, p. | MR | Zbl *immediately available upon purchase as print book shipments may be delayed due to the COVID crisis. ebook access is temporary and does not include ownership of the ebook. Only valid for books with an ebook version. From the reviews of Vols. I-III: "Since their publication in J-P. Serre's Collected Papers have already become one of the classical references in mathematical research. This is on the one hand due to the completeness of the collection ( items) and on the other, of course, due to the beautiful and clear expositions of Serre's papers and their influence on. Séminaire d'Algèbre Paul Dubreil et Marie-Paule Malliavin Groupe de Lie p-adique, Immeuble et Cohomologie. Pages Séminaire d'Algèbre Paul Dubreil et Marie-Paule Malliavin Book Subtitle Proceedings. Paris (33ème Année) Editors. M.P. Malliavin; Series TitleBrand: Springer-Verlag Berlin Heidelberg.You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them., Free ebooks since This chapter presents a p-adic theory of hyperfunctions of several variables by using relative cohomologies of rigid analytic chapter reviews the general theory of relative cohomologies of rigid analytic spaces, and discusses the relation between the usual topology of K and the Grothendieck topology of X. A lemma on the relative cohomologies on a polydisk is Cited by:
https://m.hanspub.org/journal/paper/12389
# Quaternion, Invariance of 4-Dimensional Interval and Transformations of Spacetime Coordinates

By the use of quaternions, the transformations of space-time coordinates in special relativity are studied. 1) The general transformations in quaternion form are derived, which preserve the invariance of the 4-dimensional interval, and it is found that the invariance of the interval cannot determine the Lorentz transformation uniquely. 2) Based on the condition that preserves the invariance of time, the general transformations reduce to the first kind of special transformations, in which the space rotations are included. 3) Based on the condition that preserves the invariance of a space coordinate, the general transformations reduce to the second kind of special transformations, in which the proper Lorentz transformations are included. It is pointed out that the quaternion form of the Lorentz transformations in some literature should be amended. 4) From the general transformations in quaternion form, two types of new transformations are introduced, namely discrete transformations (including identity, reflection and transposition ones) and unilateral transformations. These new transformations are different from the traditional space rotations and the normal Lorentz transformations.

[1] A. Einstein et al. The Principle of Relativity [M]. Translated by Zhao Zhitian and Liu Yiguan. Beijing: Science Press, 1980: 32-43.
[2] W. Pauli. Theory of Relativity [M]. Translated by Ling Dehong and Zhou Wansheng. Shanghai: Shanghai Scientific and Technical Publishers, 1979: 1-15.
[3] L. Landau and E. Lifshitz. The Classical Theory of Fields [M]. Translated by Ren Lang and Yuan Bingnan. Beijing: People's Education Press, 1959: 13-17.
[4] P. R. Girard. The quaternion group and modern physics. European Journal of Physics, 1984, 5(1): 25-32.
[5] A. Waser. Application of bi-quaternions in physics. 2007. www.andre-waser.ch/Publications/ApplicationOfBiQuaternionsInPhysics_EN.pdf
[6] S. De Leo, G. Ducati. Quaternionic groups in physics: A panoramic review. International Journal of Theoretical Physics, 1999, 38(8): 2197-2220.
[7] Xu Fangguan. Quaternion Physics [M]. Beijing: Peking University Press, 2012: 16-24.
[8] I. Abonyi, J. F. Bito and J. K. Tar.
A quaternion representation of the Lorentz group for classical applications. Journal of Physics A: Mathematical and General, 1991, 24(14): 3245-3254.
[9] S. De Leo. Quaternions and special relativity. Journal of Mathematical Physics, 1996, 37(6): 2955-2968.
[10] M. S. Alam, S. Bauk. Quaternion Lorentz transformation. Physics Essays, 2011, 24(2): 158-162.
[11] Wang Zhenyu, Fan Wentao. Quaternion representation of the Lorentz transformation [J]. Acta Mathematica Scientia, 2010, 30A(5): 1377-1381.
[12] Chen Guang. A generalized theory of spacetime transformations [J]. Journal of Shantou University (Natural Science Edition), 1994, 2: 15-26.
[13] Ding Guangtao. Electromagnetic theory in biquaternion form [J]. Scientia Sinica: Physica, Mechanica & Astronomica, 2012, 42(10): 1029-1039.
[14] Ding Guangtao. Quaternion approach to polarization optics [J]. Acta Optica Sinica, 2013, 33(7): 0726001.
[15] A. P. Yefremov. Quaternions: Algebra, geometry and physical theories. Hypercomplex Numbers in Geometry and Physics, 2004, 1: 104-119.
[16] Xiao Shangbin. The quaternion method and its applications [J]. Advances in Mechanics, 1993, 23(2): 249-260.
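As a minimal illustration of the invariance the abstract discusses (our own sketch, not from the paper): a unit quaternion acting by conjugation implements a space rotation, and so preserves the Euclidean norm of the spatial part of the interval.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Unit quaternion for a rotation by angle theta about the z-axis
theta = 0.73
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

v = (0.0, 1.0, 2.0, 3.0)            # pure quaternion encoding the vector (1, 2, 3)
v2 = qmul(qmul(q, v), conj(q))      # rotated vector

norm = lambda p: sum(c * c for c in p)
print(abs(norm(v) - norm(v2)) < 1e-12)  # True: the spatial norm is preserved
```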
http://quant.stackexchange.com/questions/9370/risks-of-issuing-an-autocallable-note/9371
# Risks of issuing an Autocallable Note

Let's say that I'm issuing an Autocallable Note with the following features:

- Underlying: FTSE 100
- Autocall Observation Frequency: Annual Observation
- Autocall Level: 100% of Initial Level of FTSE 100 (the Note autocalls if the FTSE 100 is above the Autocall Level on an Observation Date)
- Annual Coupon: 10% (the coupon only pays out when the Note is autocalled; the coupon accumulates to the next year if the Note is not autocalled)
- Maturity: 6 years
- Knock-In Barrier: 60% of Initial Level of FTSE 100 (KI only observed at maturity; if the Note is neither autocalled nor knocked in at maturity, the investor gets 100% of their money back)

With the above Autocallable Note, is the issuer long or short vega? Please explain. Is the issuer long or short interest rates? Or is it more complicated when it comes to the interest rate risks of an Autocallable Note? Also, what other risks are there for the issuer?
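One way to explore the vega question is a quick Monte Carlo sketch of the note's value under two volatility assumptions. This is our own illustration, not from the post: the GBM dynamics, the rate and volatility levels, and the knock-in payoff (the investor receives the final index level if it finishes below 60%) are all simplifying assumptions.

```python
import math
import random

def note_value(sigma, r=0.02, n_paths=20000, seed=1):
    """Toy valuation of the note above: annual autocall at 100%,
    10% cumulative coupon, 6y maturity, 60% KI observed at maturity only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s = 1.0           # spot normalized to the initial level
        payoff, t = None, 6
        for year in range(1, 7):
            # one-year GBM step under the risk-neutral drift
            s *= math.exp((r - 0.5 * sigma**2) + sigma * rng.gauss(0.0, 1.0))
            if s >= 1.0:  # autocall: principal plus accrued coupons
                payoff, t = 1.0 + 0.10 * year, year
                break
        if payoff is None:            # reached maturity without autocalling
            payoff = s if s < 0.60 else 1.0   # knock-in vs. principal back
        total += payoff * math.exp(-r * t)
    return total / n_paths

v_low, v_high = note_value(0.15), note_value(0.25)
# Comparing the two values hints at the sign of the note holder's vega
print(v_low, v_high)
```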
https://www.groundai.com/project/equivariant-a-theory/
# Equivariant A-theory

Cary Malkiewich and Mona Merling

###### Abstract.

We give a new construction of the equivariant A-theory of group actions of Barwick et al., by producing an infinite loop G-space for each Waldhausen category with G-action, for a finite group G. On the category of retractive spaces over a G-space X, this produces an equivariant lift of Waldhausen's functor A(X), and we show that the H-fixed points are the bivariant A-theory of the fibration EG ×_H X → BH. We then use the framework of spectral Mackey functors to produce a second equivariant refinement A_G(X) whose fixed points have tom Dieck type splittings. We expect this second definition to be suitable for an equivariant generalization of the parametrized h-cobordism theorem.

## 1. Introduction

Waldhausen's celebrated construction A(X), and the "parametrized h-cobordism" theorem relating it to the space of h-cobordisms on X, provides a critical link in the chain of homotopy-theoretic constructions relating the behavior of compact manifolds to that of their underlying homotopy types [waldhausen1978alg] [waldhausennew]. While the A-theory assembly map provides the primary invariant that distinguishes the closed manifolds in a given homotopy type, the h-cobordism space provides the secondary information that accesses the diffeomorphism and homeomorphism groups in a stable range [ww]. And in the case of compact manifolds up to stabilization, A-theory accounts for the entire difference between the manifold and its underlying homotopy type with tangent information [dww]. As a consequence, calculations of A(X) have immediate consequences for the automorphism groups of high-dimensional closed manifolds, and of compact manifolds up to stabilization.

When the manifolds in question have an action by a group G, there is a similar line of attack for understanding the equivariant homeomorphisms and diffeomorphisms. One expects to replace A(X) with an appropriate space of G-isovariant h-cobordisms on X, stabilized with respect to representations of G.
The connected components of such a space would be expected to coincide with the equivariant Whitehead group of X [luck], which splits as

$$\mathrm{Wh}_G(X) \cong \bigoplus_{(H)\leq G} \mathrm{Wh}\bigl(X^H_{hWH}\bigr) \tag{1}$$

where $(H)$ denotes conjugacy classes of subgroups. This splitting is reminiscent of the tom Dieck splitting for genuine G-suspension spectra

$$\bigl(\Sigma^\infty_G X_+\bigr)^G \simeq \bigvee_{(H)\leq G} \Sigma^\infty_+ X^H_{hWH}$$

and suggests that the variant of A-theory most directly applicable to manifolds will in fact be a genuine G-spectrum, whose fixed points have a similar splitting.

In this paper we begin to realize this conjectural framework. We define an equivariant generalization of Waldhausen's A-theory functor, when X is a space with an action by a finite group G, whose fixed points have the desired tom Dieck style splitting.

###### Theorem 1.1. For G a finite group, there exists a functor A_G from G-spaces to genuine G-spectra with fixed points

$$A_G(X)^G \simeq \prod_{(H)\leq G} A\bigl(X^H_{hWH}\bigr),$$

and a similar formula for the fixed points of each subgroup H.

To be more specific, the fixed points are the A-theory of the category of finite retractive G-cell complexes over X, with equivariant weak homotopy equivalences between them. The splitting of this A-theory is a known consequence of the additivity theorem, and an explicit proof appears in [wojciech].

In a subsequent paper, we plan to explain how A_G fits into a genuinely G-equivariant generalization of Waldhausen's parametrized h-cobordism theorem. The argument we have in mind draws significantly from an analysis of the fixed points of our A_G carried out by Badzioch and Dorabiała [wojciech], and a forthcoming result of Goodwillie and Igusa that defines such a space of equivariant h-cobordisms and gives a splitting that recovers (1). We emphasize that lifting these theorems to genuine G-spectra permits the tools of equivariant stable homotopy theory to be applied to the calculation of A_G(X), in addition to the linearization and trace techniques that have been used so heavily in the nonequivariant case.
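For concreteness (our own unwinding of Theorem 1.1, not a display from the paper), take $G = \mathbb{Z}/2$. The two conjugacy classes of subgroups are the trivial subgroup, whose Weyl group is $G$ itself, and $G$, whose Weyl group is trivial, so the fixed-point formula specializes to

```latex
A_G(X)^G \;\simeq\; A\bigl(X_{hG}\bigr) \times A\bigl(X^G\bigr)
\qquad (G = \mathbb{Z}/2),
```

one factor recording the homotopy orbits of the whole space and one recording the fixed-point space.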
Most of the work in this paper is concerned with constructing equivariant spectra out of category-theoretic data. One approach is to generalize classical delooping constructions such as the operadic machine of May [MayGeo] or the $\Gamma$-space machine of Segal [segal] to allow for deloopings by representations of $G$. Using the equivariant generalization of the operadic infinite loop space machine from [GM3], we show how this approach generalizes to deloop Waldhausen $G$-categories. The theory of Waldhausen categories with $G$-action is subtle. Even when the $G$-action is through exact functors, the fixed points of such a category do not necessarily have a Waldhausen structure (LABEL:waldfixedpts@). Define $EG$ to be the category with objects the elements of $G$ and precisely one morphism between any two objects, whose classifying space is the usual free contractible $G$-space $EG$. Let $\mathrm{Cat}(EG, \mathcal{C})$ be the category of all functors $EG \to \mathcal{C}$ and all natural transformations, with $G$ acting by conjugation; we define the homotopy fixed points of a $G$-category $\mathcal{C}$ as the fixed point category $\mathcal{C}^{hG} := \mathrm{Cat}(EG, \mathcal{C})^G$, and we explain in §LABEL:waldhausen_gcat how this category does have a Waldhausen structure. The "equivariant $K$-theory of group actions" of Barwick, Glasman, and Shah produces a genuine $G$-spectrum (using the framework of [Gmonster]) whose $H$-fixed points are the $K$-theory of $\mathcal{C}^{hH}$ [Gmonster2, §8]. We complement this with a result that shows the $K$-theory $G$-space may be directly, equivariantly delooped.

###### Theorem 1.2 (LABEL:inf_loop@ and LABEL:fixed_points_agree@).

If $\mathcal{C}$ is a Waldhausen $G$-category then the $K$-theory space defined as $\Omega|wS_\bullet \mathcal{C}|$, where $S_\bullet$ is Waldhausen's construction from [waldhausen], is an equivariant infinite loop space. The $H$-fixed points of the resulting $\Omega$-$G$-spectrum are equivalent to the $K$-theory of the Waldhausen category $\mathcal{C}^{hH}$ for every subgroup $H$.

The downside of this approach is that one does not have much freedom to modify the weak equivalences in the fixed point categories. Note that if $X$ is a $G$-space, then the category of homotopy finite retractive spaces over $X$ has a $G$-action.
For a retractive space $Y$ over $X$ and $g \in G$, the retractive space $g \cdot Y$ is defined by precomposing the inclusion map by $g^{-1}$ and postcomposing the retraction map by $g$. We can apply 1.2 to this category, and the resulting theory, which we call $A^{\textup{coarse}}_G(X)$, has as its $G$-fixed points the $K$-theory of $G$-equivariant spaces over $X$, as we expect, but the weak equivalences are the $G$-maps which are nonequivariant homotopy equivalences. Thus, Theorem 1.2 does not suffice to prove Theorem 1.1. Although $A^{\textup{coarse}}_G(X)$ does not match our expected input for the $h$-cobordism theorem, it does have a surprising connection to the bivariant $A$-theory of Williams [bruce]:

###### Theorem 1.3 (LABEL:prop:coarse_equals_bivariant@ and LABEL:thm:homotopy_fixed_equals_coassembly@).

There is a natural equivalence of spectra

$A^{\textup{coarse}}_G(X)^H \simeq A(EG \times_H X \longrightarrow BH)$

Under this equivalence, the coassembly map for bivariant $A$-theory agrees up to homotopy with the map from fixed points to homotopy fixed points.
2019-08-18 09:03:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177985548973083, "perplexity": 891.9528280028263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313747.38/warc/CC-MAIN-20190818083417-20190818105417-00218.warc.gz"}
https://zbmath.org/?q=an:0967.35002
## Partial differential equations in mechanics 2. The biharmonic equation, Poisson’s equation.(English)Zbl 0967.35002 Berlin: Springer. xviii, 698 p. (2000). The textbook under the above title continues the corresponding first volume [A. P. S. Selvadurai, Partial differential equations in mechanics 1. Fundamentals, Laplace’s equation, diffusion equation, wave equation, Berlin: Springer (2000; reviewed above)] and is written in the same style. It contains two new chapters, i.e.: 8. The biharmonic equation; 9. Poisson’s equation. ### MSC: 35-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to partial differential equations 00A06 Mathematics for nonmathematicians (engineering, social sciences, etc.) 74-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to mechanics of deformable solids 76-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to fluid mechanics ### Keywords: biharmonic equation; Poisson’s equation
2022-12-03 12:52:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348587155342102, "perplexity": 12781.606382377015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00551.warc.gz"}
http://tex.stackexchange.com/questions/7498/how-can-i-prevent-latex-from-breaking-words-and-wrapping-them-to-the-next-line?answertab=active
# How can I prevent LaTeX from breaking words and wrapping them to the next line?

I want to have my entire document fully justified but without the words being broken up and hyphenated at the end of a line if it is too long. I have come across the `\raggedright` argument which prevents the hyphenation, but I'm not sure how to then justify the text.

-

`\usepackage[none]{hyphenat}`

-

Having both no hyphenation and text flush to both sides puts real stress on spacing. Compare:

```Quisque quis nisl eu nibh suscipit rutrum. Suspendisse potenti. Maecenas quis neque ut velit pellentesque commodo. Donec et nulla tortor.```

with

```Quisque quis nisl eu nibh suscipit rutrum. Suspendisse potenti. Maec- enas quis neque ut velit pellentesque commodo. Donec et nulla tortor.```

If you are not using Luatex, try it: good microtypography can minimise or eliminate hyphenation. Herbert's answer, `\usepackage[none]{hyphenat}`, can be used together with microtypography.

-

using `\ttfamily` is not a good idea –  Herbert Dec 21 '10 at 11:51
@Herbert: The above was represented in HTML, not Tex, and there's only the option of monospaced fonts there. I guess proportional spacing could be counted as a crude form of microtypography. –  Charles Stewart Dec 21 '10 at 11:59
+1 for telling that flush + no hyphenation has severe drawbacks. –  Hendrik Vogt Dec 21 '10 at 12:16
did you produce the example by hand, or is there some automatic way of doing that? –  Bruno Le Floch Mar 12 '11 at 19:50
@Bruno: I used some Emacs Lisp code to set flush to column width, and put the hyphenation in by hand. –  Charles Stewart Mar 14 '11 at 7:56
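For reference, pulling the two answers together, a minimal preamble might look like the following. This is an untested sketch: it simply combines the `hyphenat` option given above with `microtype` to relieve the interword-spacing stress that disabling hyphenation causes.

```latex
\documentclass{article}
\usepackage[none]{hyphenat} % forbid hyphenation entirely (Herbert's answer)
\usepackage{microtype}      % character protrusion/expansion eases tight lines
% \sloppy                   % last resort: allow visibly looser interword spacing
\begin{document}
Quisque quis nisl eu nibh suscipit rutrum. Suspendisse potenti.
Maecenas quis neque ut velit pellentesque commodo. Donec et nulla tortor.
\end{document}
```

The text stays fully justified; only the hyphenation points are removed, so expect occasional overfull boxes in narrow columns.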
2015-10-05 12:57:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9368322491645813, "perplexity": 2552.8634320681526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736677342.4/warc/CC-MAIN-20151001215757-00110-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/binomial-theorem-related-proofs.557075/
# Binomial Theorem related proofs

1. Dec 5, 2011

### h.shin

1. The problem statement, all variables and given/known data
Let a be a fixed positive rational number. Choose (and fix) a natural number M > a.
a) For any n$\in$N with n$\geq$M, show that (a^n)/(n!)$\leq$((a/M)^(n-M))*(a^M)/(M!)
b) Use the previous problem to show that, given e > 0, there exists an N$\in$N such that for all n$\geq$N, (a^n)/(n!) < e

2. Relevant equations

3. The attempt at a solution
I just don't really know where to start. Any hints? or suggestions?

2. Dec 5, 2011

### HallsofIvy

Start by looking at simple examples. What if, say, a = 1/2, M = 1 and n = 2? What if M = 2 and n = 2?
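For what it's worth, here is a sketch of the algebra behind part (a), not the thread's official solution, just the standard estimate obtained by writing out the factorial and bounding each factor beyond M:

```latex
\frac{a^n}{n!}
  = \frac{a^M}{M!}\cdot\frac{a}{M+1}\cdot\frac{a}{M+2}\cdots\frac{a}{n}
  \le \frac{a^M}{M!}\cdot\left(\frac{a}{M}\right)^{n-M},
```

since each of the n - M factors a/k with k > M satisfies a/k ≤ a/M. For part (b), the choice M > a gives a/M < 1, so (a/M)^(n-M) → 0 as n → ∞, and one can pick N large enough that the right-hand side is below e for all n ≥ N.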
2018-03-23 06:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5789700150489807, "perplexity": 2677.4016479216743}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00250.warc.gz"}
https://www.lmfdb.org/ArtinRepresentation/?dimension=2
Results (1-50 of at least 1000) Next Galois conjugate representations are grouped into single lines. Label Dimension Conductor Defining polynomial of Artin field $G$ Ind $\chi(c)$ 2.23.3t2.b.a $2$ $23$ x3 - x2 + 1 $S_3$ $1$ $0$ 2.31.3t2.b.a $2$ $31$ x3 + x - 1 $S_3$ $1$ $0$ 2.39.4t3.a.a $2$ $3 \cdot 13$ x4 - x3 - x2 + x + 1 $D_{4}$ $1$ $0$ 2.44.3t2.b.a $2$ $2^{2} \cdot 11$ x3 - x2 + x + 1 $S_3$ $1$ $0$ 2.47.5t2.a.a 2.47.5t2.a.b $2$ $47$ x5 - 2x4 + 2x3 - x2 + 1 $D_{5}$ $1$ $0$ 2.52.6t5.b.a 2.52.6t5.b.b $2$ $2^{2} \cdot 13$ x6 - x4 - 2x3 + 2x + 1 $S_3\times C_3$ $0$ $0$ 2.55.4t3.c.a $2$ $5 \cdot 11$ x4 - x3 + 2x - 1 $D_{4}$ $1$ $0$ 2.56.4t3.b.a $2$ $2^{3} \cdot 7$ x4 - x3 + x + 1 $D_{4}$ $1$ $0$ 2.57.6t5.a.a 2.57.6t5.a.b $2$ $3 \cdot 19$ x6 - x5 + x4 - 2x3 + 4x2 - 3x + 1 $S_3\times C_3$ $0$ $0$ 2.59.3t2.a.a $2$ $59$ x3 + 2x - 1 $S_3$ $1$ $0$ 2.63.4t3.a.a $2$ $3^{2} \cdot 7$ x4 - x3 + 2x + 1 $D_{4}$ $1$ $0$ 2.68.4t3.a.a $2$ $2^{2} \cdot 17$ x4 + x2 - 2x + 1 $D_{4}$ $1$ $0$ 2.68.8t17.a.a 2.68.8t17.a.b $2$ $2^{2} \cdot 17$ x8 - 2x7 + 4x5 - 4x4 + 3x2 - 2x + 1 $C_4\wr C_2$ $0$ $0$ 2.71.7t2.a.a 2.71.7t2.a.b 2.71.7t2.a.c $2$ $71$ x7 - x6 - x5 + x4 - x3 - x2 + 2x + 1 $D_{7}$ $1$ $0$ 2.72.6t5.b.a 2.72.6t5.b.b $2$ $2^{3} \cdot 3^{2}$ x6 - 2x5 + 3x4 - 2x3 + 2x2 + 1 $S_3\times C_3$ $0$ $0$ 2.76.3t2.a.a $2$ $2^{2} \cdot 19$ x3 - 2x - 2 $S_3$ $1$ $0$ 2.77.10t6.b.a 2.77.10t6.b.b 2.77.10t6.b.c 2.77.10t6.b.d $2$ $7 \cdot 11$ x10 - 3x9 + 7x8 - 12x7 + 15x6 - 15x5 + 12x4 - 7x3 + 4x2 - 2x + 1 $D_5\times C_5$ $0$ $0$ 2.79.5t2.a.a 2.79.5t2.a.b $2$ $79$ x5 - x4 + x3 - 2x2 + 3x - 1 $D_{5}$ $1$ $0$ 2.80.4t3.a.a $2$ $2^{4} \cdot 5$ x4 - 2x3 + 2 $D_{4}$ $1$ $0$ 2.83.3t2.a.a $2$ $83$ x3 - x2 + x - 2 $S_3$ $1$ $0$ 2.84.6t5.b.a 2.84.6t5.b.b $2$ $2^{2} \cdot 3 \cdot 7$ x6 - 3x5 + 4x4 - x3 - 2x2 + x + 1 $S_3\times C_3$ $0$ $0$ 2.87.3t2.a.a $2$ $3 \cdot 29$ x3 - x2 + 2x + 1 $S_3$ $1$ $0$ 2.87.6t3.b.a $2$ $3 \cdot 29$ x6 - x5 + 4x4 - 4x3 + 5x2 - 3x + 1 $D_{6}$ $1$ $0$ 2.88.10t6.b.a 2.88.10t6.b.b 
2.88.10t6.b.c 2.88.10t6.b.d $2$ $2^{3} \cdot 11$ x10 - 2x9 + x8 + 2x7 - 3x6 + 2x4 + 2x3 - x2 - 2x + 1 $D_5\times C_5$ $0$ $0$ 2.93.10t6.b.a 2.93.10t6.b.b 2.93.10t6.b.c 2.93.10t6.b.d $2$ $3 \cdot 31$ x10 + 2x8 - 3x7 + 3x6 - 7x5 + 8x4 - 7x3 + 7x2 - 4x + 1 $D_5\times C_5$ $0$ $0$ 2.95.4t3.c.a $2$ $5 \cdot 19$ x4 - 2x3 + 2x2 - x - 1 $D_{4}$ $1$ $0$ 2.95.8t6.a.a 2.95.8t6.a.b $2$ $5 \cdot 19$ x8 - x7 + x5 - 2x4 - x3 + 2x2 + 2x - 1 $D_{8}$ $1$ $0$ 2.99.6t5.a.a 2.99.6t5.a.b $2$ $3^{2} \cdot 11$ x6 - x4 - 2x3 + 3x2 + x + 1 $S_3\times C_3$ $0$ $0$ 2.100.10t6.b.a 2.100.10t6.b.b 2.100.10t6.b.c 2.100.10t6.b.d $2$ $2^{2} \cdot 5^{2}$ x10 - 4x9 + 9x8 - 14x7 + 15x6 - 10x5 + 3x4 + 2x3 - 2x2 + 1 $D_5\times C_5$ $0$ $0$ 2.103.5t2.a.a 2.103.5t2.a.b $2$ $103$ x5 - 2x4 + 3x3 - 3x2 + x + 1 $D_{5}$ $1$ $0$ 2.104.3t2.b.a $2$ $2^{3} \cdot 13$ x3 - x - 2 $S_3$ $1$ $0$ 2.104.6t3.a.a $2$ $2^{3} \cdot 13$ x6 + 2x4 - 2x3 + 2x2 + 1 $D_{6}$ $1$ $0$ 2.107.3t2.a.a $2$ $107$ x3 - x2 + 3x - 2 $S_3$ $1$ $0$ 2.108.3t2.b.a $2$ $2^{2} \cdot 3^{3}$ x3 - 2 $S_3$ $1$ $0$ 2.111.4t3.a.a $2$ $3 \cdot 37$ x4 - x3 - 2x2 + 3 $D_{4}$ $1$ $0$ 2.111.8t6.a.a 2.111.8t6.a.b $2$ $3 \cdot 37$ x8 - 3x7 + 3x6 - 3x5 + 5x4 - 6x3 + 6x2 - 3x + 1 $D_{8}$ $1$ $0$ 2.111.6t5.a.a 2.111.6t5.a.b $2$ $3 \cdot 37$ x6 - 3x5 + 4x4 - 2x3 - 2x2 + 2x + 1 $S_3\times C_3$ $0$ $0$ 2.112.8t17.a.a 2.112.8t17.a.b $2$ $2^{4} \cdot 7$ x8 - 3x7 + 6x6 - 8x5 + 10x4 - 9x3 + 6x2 - 2x + 1 $C_4\wr C_2$ $0$ $0$ 2.116.3t2.a.a $2$ $2^{2} \cdot 29$ x3 - x2 - 2 $S_3$ $1$ $0$ 2.116.6t3.b.a $2$ $2^{2} \cdot 29$ x6 - 2x3 + x2 + 2x + 2 $D_{6}$ $1$ $0$ 2.116.14t8.b.a 2.116.14t8.b.b 2.116.14t8.b.c 2.116.14t8.b.d 2.116.14t8.b.e 2.116.14t8.b.f $2$ $2^{2} \cdot 29$ x14 - 4x13 + 7x12 - 4x11 - 8x10 + 24x9 - 30x8 + 16x7 + 13x6 - 38x5 + 46x4 - 36x3 + 19x2 - 6x + 1 $C_7 \wr C_2$ $0$ $0$ 2.117.8t17.b.a 2.117.8t17.b.b $2$ $3^{2} \cdot 13$ x8 - 2x6 - 3x5 + 3x4 + 3x3 - 2x2 + 1 $C_4\wr C_2$ $0$ $0$ 2.119.5t2.a.a 2.119.5t2.a.b $2$ $7 \cdot 17$ x5 - x4 - x2 + 3x - 1 $D_{5}$ $1$ $0$ 
2.120.8t11.c.a 2.120.8t11.c.b $2$ $2^{3} \cdot 3 \cdot 5$ x8 - 3x7 + 3x6 + x5 - 2x4 - 3x3 + 7x2 - 4x + 1 $Q_8:C_2$ $0$ $0$ 2.124.16t60.a.a 2.124.16t60.a.b 2.124.16t60.a.c 2.124.16t60.a.d $2$ $2^{2} \cdot 31$ x16 - 2x14 + x12 + 6x10 + 12x8 - 6x6 + x4 + 2x2 + 1 16T60 $0$ $0$ 2.127.5t2.a.a 2.127.5t2.a.b $2$ $127$ x5 - x4 - 2x3 + x2 + 3x - 1 $D_{5}$ $1$ $0$ 2.128.4t3.a.a $2$ $2^{7}$ x4 - 2x2 + 2 $D_{4}$ $1$ $0$ 2.129.14t8.a.a 2.129.14t8.a.b 2.129.14t8.a.c 2.129.14t8.a.d 2.129.14t8.a.e 2.129.14t8.a.f $2$ $3 \cdot 43$ x14 - 3x13 + 8x12 - 18x11 + 32x10 - 52x9 + 70x8 - 81x7 + 82x6 - 70x5 + 52x4 - 31x3 + 16x2 - 6x + 1 $C_7 \wr C_2$ $0$ $0$ 2.131.5t2.a.a 2.131.5t2.a.b $2$ $131$ x5 - x4 + 2x3 - x2 + x + 2 $D_{5}$ $1$ $0$ 2.133.6t5.a.a 2.133.6t5.a.b $2$ $7 \cdot 19$ x6 - x5 + x4 + 5x2 + 4x + 1 $S_3\times C_3$ $0$ $0$ Next
2020-10-29 00:15:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22880522906780243, "perplexity": 753.745661514122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902038.86/warc/CC-MAIN-20201028221148-20201029011148-00611.warc.gz"}
https://chemistry.stackexchange.com/questions/40563/predicting-the-spontaneity-of-non-isothermal-reactions/40595
# Predicting the spontaneity of non-isothermal reactions We use free energy equations -- Helmholtz or Gibbs -- to predict whether or not a reaction is spontaneous. These equations depend on constant temperature. This forum post describes a scenario where volumes of water at different temperatures are mixed. The $\Delta G$ appears to be positive. (The poster's textbook gives $\Delta G = 99.5 \, \mathrm{Cal}$, a result I've been able to duplicate.) This seems to show that $\Delta G$ doesn't predict spontaneity for non-isothermal reactions. So what does? The criterion that, at constant temperature, if a reaction is spontaneous, $\Delta G^0$ in going from pure reactants to pure products at 1 bar will be negative is just a rule of thumb which, over the years, has confused students to no end. If you start out with pure reactants and mix them, spontaneous reaction will always occur to some extent. It's just that, if $\Delta G^0$ is positive, the reaction will tend to proceed to a lesser extent when equilibrating, and if $\Delta G^0$ is negative, it will tend to proceed to a greater extent when equilibrating. So, in this sense, every reaction is spontaneous.
2020-01-25 15:23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441550731658936, "perplexity": 632.160144504121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672537.90/warc/CC-MAIN-20200125131641-20200125160641-00514.warc.gz"}
https://www.physicsforums.com/threads/function-for-and-while-in-c.543933/
# Function 'for' and 'while' in C++

• Comp Sci

## Homework Statement

What is the difference between these two functions?

## The Attempt at a Solution

phinds Gold Member
So, what ideas do you have? If you don't say what you know, we can't offer advice on how you might learn more. If you are just looking for someone to write out an answer for you, you are on the wrong forum. I see you have 30 posts so you surely know by now that folks here will go out of their way to help you understand something, but no one is likely to be interested in just doing your work for you.

Mark44 Mentor
First off, for and while are not functions - they are keywords in C, C++, and other programming languages that are based on C.

Oh, sorry for that. I know that you can use both for and while to find the sum of the numbers from 0 to 10, which is 55. But I don't know the difference between these two, and someone told me that you can use 'while' when the range of the number is not given. I don't understand that.

Use your example of 0-10, adding up the numbers to get 55. Do you have to rewrite your for loop to solve the same thing for, say, 0-15? 0-50? 0-100? Would you want to use this for loop for 0-10 as a function that accepts different values of X, where x=10 means sum up all the numbers up to 10 and x=15 means sum up all the numbers up to 15? Now if you solved this with a while loop, would you need to rewrite your while loop for different values? 0-15, 0-50, 0-100? Would you want to use this while loop for a function that accepts different values of X?

Hurkyl Staff Emeritus Gold Member
Can you write down an example of each? Then, can you state all of the features of each example? Then, can you state what you think each example does?

You can choose freely if you want to use for or while. They are equivalent. The only difference is their syntax.
#include <iostream>
using namespace std;
int main() {
    int a = 0, b = 1;
    while (b <= 10) {
        a += b;
        ++b;
    }
    cout << a << endl;
}

This will find the sum from 1 to 10. The computer will do the commands in the curly brackets as long as int b obeys the condition (b<=10).

#include <iostream>
using namespace std;
int main() {
    int a = 0;
    for (int b = 1; b <= 10; ++b)
        a += b;
    cout << a << endl;
}

This has the same function. The computer will check whether the initial value of b satisfies the condition; if it does, it will perform the command a+=b; and then the update ++b, until b no longer satisfies the condition.

If both can do the same thing, then what's the need of having both 'for' and 'while'?

Because for loops have syntax that makes it easy to use for situations when you know they need to run a specific number of times (like if you're going through a set-length array), while while loops are better for situations when you don't know when the sequence will need to break (like if you're reading a file). It's true that you can use both for the same cases. You can also use goto and conditionals to do the same thing as well (but don't). Keep in mind that you're working in an abstract environment by using a programming language, so there are several ways to do the same basic tasks and each has been designed to apply to specialized situations to make it easier on the programmer. I mean if you really want to strip things down to barebones, wait until you're working with assembly.

Oh, can you give me an example? (for 'for' and 'while') So you mean that it doesn't matter which way I use, as long as I get the right results, yea?

Pretty much, yeah.
Here's an example of a for loop application, converting a decimal number to 8-bit binary and storing the result in an array of 1s and 0s (in C, though it shouldn't be much different):

Code:
int dec = <some integer>;
int bin[8];
int i;
for (i = 0; i < 8; i++) {
    bin[i] = dec % 2;  /* least significant bit first */
    dec /= 2;
}

A for loop is more appropriate because you're always going to go through it 8 times, no matter what dec is. And here's a while loop reading in a file and printing the contents (also in C):

Code:
FILE *fp;
fp = fopen(<some file>, "r");
int c = getc(fp);  /* initialize c as the first character in the input file */
while (c != EOF) { /* keep the loop going until c == EOF (end-of-file) */
    putchar(c);    /* print c as a character (c is an int, but it has a char representation) */
    c = getc(fp);  /* set c as the next character in the file */
}

Though that would read better as a do-while:

Code:
FILE *fp;
fp = fopen(<some file>, "r");
int c;
do {
    c = getc(fp);
    if (c != EOF)
        putchar(c); /* don't print the EOF sentinel itself */
} while (c != EOF);

Which just means do the loop once before checking to see if you should. In this case, a while loop is more appropriate because the file's size isn't specified.

Last edited:
2021-03-01 08:04:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3643426299095154, "perplexity": 721.8810813188535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00338.warc.gz"}
https://codegolf.meta.stackexchange.com/questions/12069/character-encoding-based-rules-with-language-without-encoding
Character encoding-based rules with language without encoding

Oftentimes people will ask challenges that have rules hinging on the characters encoded by the source file. For example, "Your program may not contain the letter a". But some languages are sequences of bytes that are not associated with any character encoding (e.g. machine code or a deflate stream). How should such languages be treated when interpreting this kind of restriction?

• I think the default should simply be ASCII unless the OP says differently. If the source restriction actually affects non-ASCII characters, it will be up to the OP anyway, and having a "default" doesn't really make sense, as it would be unclear from the specification point of view. Just leave it to the OP to provide an explanation of the restrictions. – mbomb007 Apr 12 '17 at 13:59

Rather than propose a default encoding, I'm going to suggest an alternative route: close the challenge as unclear. If it's unclear whether or not 65 66 67 contains A, that's the fault of the challenge. Get clarification from the challenge author on what is and isn't allowed.

• From this, it sounds like you're thinking the author should be restricting code points rather than representations? "Your source code cannot have 0x66 or 0x88 anywhere" or similar, right? – AdmBorkBork Apr 13 '17 at 19:34
• @AdmBorkBork Exactly. That's unambiguous and doesn't require extra specification for multiple encodings. – Mego Apr 13 '17 at 19:44

I propose that the Windows-1252 character encoding be used as the default character set for languages that don't specify otherwise.

Why Windows-1252?

• By far the most character encodings are ASCII-based (extensions of ASCII-7), so if we were to pick a character encoding, it should be one of those. The sooner we get EBCDIC out of the picture, the better.
• Unicode is the go-to character set in these days, but which encoding? Some byte-sequences are invalid UTF-8, ditto for UTF-16 and UTF-32, so neither of these will fly.
• You could pick the first 256 characters in Unicode and be done with it. This character set already exists, and it's called Latin 1, AKA ISO-8859-1. The downside is, this character set includes a whole bunch of non-printable characters that nobody uses (other than those coming from ASCII).
• So, let's look for a superset of (ISO-8859-1 minus C1 control codes). Wikipedia says that the superset of this encoding is Windows-1252. It's also called "ANSI", and while the reason behind it isn't great, the naming certainly did benefit Windows-1252. It still leaves some bytes without an associated character, but there doesn't seem to be a standardized extension that fills them in. As to why it's named ANSI, I'll let @ais523 explain:

Microsoft's calling of the encoding "ANSI" is a misunderstanding, rather than anything official. They basically have two sets of APIs, one for Unicode, and one for 8-bit character sets. When originally creating the terminology, they assumed that the 8-bit character sets would be ANSI-standardised, and thus used the name ANSI for them collectively, but it turned out that one of Microsoft's own became much more popular.

• The comparison of hex editors on Wikipedia also suggests Windows-1252, besides CP437, as a good choice. The hex editor I use personally uses Windows-1252 as well.
• HTML5 standard specifies that Windows-1252 be used whenever a web page says it's in ISO-8859-1. ISO-8859-1 / Windows-1252 is the most common encoding on the Internet after UTF-8, by a wide margin.

Why any encoding at all?

• It makes displaying the code much easier. Also, it makes character-based source restrictions relevant for the language. Which superset of ASCII is used mostly doesn't matter for the purpose of source restrictions.
• As a case study, the question mentions a deflate stream, but a deflate stream isn't a programming language, so I'll assume you meant Bubblegum, which is a proper superset thereof.
But Bubblegum does use a charset - it's the same one as is used by the Python 3 interpreter and it becomes relevant when the program's SHA-256 hash is 5e247c455fde7711206ebaa3ad0793114b77a6d16ed0497eff8e3bf98c6dba23. • The other language mentioned in the question is machine code (assumed x86). Sure, the instruction parser in a CPU couldn't care less how a byte within an instruction looks when you display it on the screen or which character it represents in ASCII, but there is a thing called printable machine code - that is, the subset of machine code restricted to printable ASCII. It becomes relevant when you try to pull off arbitrary code execution on a remote web server. • I should note that Microsoft's calling of the encoding "ANSI" is a misunderstanding, rather than anything official. They basically have two sets of APIs, one for Unicode, and one for 8-bit character sets. When originally creating the terminology, they assumed that the 8-bit character sets would be ANSI-standardised, and thus used the name ANSI for them collectively, but it turned out that one of Microsoft's own became much more popular. – user62131 Apr 12 '17 at 12:58 • @ais523 thanks for the information. Edited. – John Dvorak Apr 12 '17 at 13:28 • I'm just going to throw a Mondrainian wrench into the works. Where does Piet fit, then? – Draco18s no longer trusts SE Apr 25 '17 at 19:33 • @draco good point. You could argue that Piet does have a character set defined, though - it's the codels. The character encoding is more unusual than for most languages. Will this argument pass? :-D – John Dvorak Apr 26 '17 at 0:29
2020-10-01 16:56:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49864858388900757, "perplexity": 1885.8676251171769}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00162.warc.gz"}
http://mathoverflow.net/questions/99506/blackbox-theorems?answertab=oldest
Blackbox Theorems [closed]

By a blackbox theorem I mean a theorem that is often applied but whose proof is understood in detail by relatively few of those who use it. A prototypical example is the Classification of Finite Simple Groups (assuming the proof is complete). I think very few people really know the nuts and bolts of the proof, but it is widely applied in many areas of mathematics. I would prefer not to include exotic counterexamples as blackbox theorems, because they are not usually applied in the same sense as the Classification of Finite Simple Groups. I am curious to compile a list of such blackbox theorems, with the usual CW rules of one example per answer. Obviously this is not connected to my research directly, so I can understand if this gets closed.

- closed as no longer relevant by Benjamin Steinberg, Bill Johnson, Felipe Voloch, Will Jagy, Asaf Karagila Sep 15 '12 at 9:38

• The simpler proofs are still at least 20 pages of fairly technical mathematics though. – Karl Schwede Jun 13 '12 at 21:38
• Domain of use is important here. Many theorems are invoked by physicists who have no idea of the actual proofs. – Steve Huntsman Jun 13 '12 at 22:05
• @Zsbán: That theorem has nice consequences, e.g., in finite geometry. But treating it as a blackbox is just laziness, since the proof is just a couple of pages of basic graduate algebra. – Felipe Voloch Jun 15 '12 at 0:19
• The classification is used by people working on permutation groups and graph theory all the time. It is also used in profinite group theory.
– Benjamin Steinberg Jun 16 '12 at 19:21
• In my mind I was hoping for things used in at least 100 papers and understood in all technical detail by fewer than 5% of people in the general area to which the theorem belongs. But it need not be this rigid. – Benjamin Steinberg Jun 17 '12 at 3:08

Does FLT count?

• Maybe the modularity theorem counts, but I wouldn't say FLT does. – Steven Gubkin Jun 13 '12 at 21:32
• Dirk, the reason FLT doesn't count is that it is almost never used. – Joël Jun 14 '12 at 23:14

The existence of resolution of singularities in characteristic zero is certainly used by many more people than those who know the details of its proofs, especially the original one.

• There are many papers by H. Hauser whose message is "You can understand Hironaka's proof!". There is even a game-theoretic interpretation. Very recommended. – Martin Brandenburg Jun 14 '12 at 8:15
• There are also now books (one by Kollar and another by Cutkosky) that aim to present the proof at a graduate student level. I think they don't prove the most general/detailed statements from Hironaka's original paper, though. – Dan Ramras Jun 15 '12 at 0:34

Determinacy of Borel Games seems like a good example of this.

The decomposition theorem for perverse sheaves is used in many areas of mathematics, for example representation theory, while the details of the weights machinery involved in its proofs are notoriously hard.

I think the main statements of the MMP (Minimal Model Program) in algebraic geometry qualify for this. It will even become more of a black box in the future, as people understand better how to apply it.

• Can you be a little more specific? Do you refer to minimal models in algebraic topology? – Gil Kalai Jun 14 '12 at 11:40

Fixed point theorems (such as Brouwer's and Kakutani's) are very frequently invoked, especially in Econ. I am not sure how many people are familiar with the proofs. There are many nice proofs available, by the way.
I think also many people treat certain tools in homological algebra this way, for example various facts about spectral sequences and how to use them. In the spectral sequences example, I feel like many people once learned the background, and then forgot it (perhaps could reconstruct if forced). But regardless, they still know how to apply the machines in the problems relevant to them.

• Staying with homological algebra, I think that the whole "derived functors" story is used by many people but the proofs are rarely read. – Filippo Alberto Edoardo Jun 14 '12 at 2:27
• Definitely spectral sequences come to you in black boxes first ... and perhaps they stay black boxes ;). – Martin Brandenburg Jun 14 '12 at 9:57

Most mathematicians know that the axiom of choice is independent from the ZF axioms, but I guess most non-set-theorists don't know the details of the proof.

• Is this theorem actually applied frequently inside or outside of set theory? – Jan Weidner Jun 13 '12 at 21:59
• Would you use AC if it contradicted ZF? The magnitude of the independence theorem is that we use it implicitly whenever we apply AC, since it tells us that AC doesn't lead to logical contradictions that weren't already present in ZF. – Ralph Jun 13 '12 at 22:20
• @Ralph, I disagree. First, the independence of AC from ZF is not the same as the consistency of ZFC relative to ZF (indeed, the latter is very easy). Second, I'm not certain that even this is used frequently outside of set theory; when non-logicians use the axiom of choice, they are not tacitly assuming that it is consistent with ZF, they are tacitly assuming that it is part of some consistent set theory - for example, how many non-logicians know the ZF axioms off the top of their head? I think in practice the set theory that is actually used is generally some high-but-finite-order arithmetic. – Noah Schweber Jun 13 '12 at 22:56
• @Ralph: you may say the same about any axiom in any theory. – Michal R. Przybylek Jun 13 '12 at 23:41
• In fact, building off of Michal, perhaps the consistency of the axioms of powerset, replacement, and separation would be better, since these are implicitly used whenever comprehension (forming the set of all $x$ such that $P(x)$) is used, and full comprehension actually is contradictory! But I still don't feel that these are good examples. Roughly speaking, either you're a Platonist - in which case mere consistency of AC isn't sufficient to justify using it - or one is interested in proving theorems from axioms, in which case "ZFC proves X" is valuable even if ZFC isn't known to be consistent. – Noah Schweber Jun 14 '12 at 0:24

Jordan's curve theorem is used as a blackbox. This topology theorem states that a looped continuous path in the plane partitions the points of the plane, such that any continuous path going from a point in one partition to a point in the other intersects the loop. There seem to be a lot of theorems in calculus of which I don't fully understand the proof, though some of this shows my ignorance of calculus. Jordan's theorem seems to be an extreme example though. Let me list some other examples.

• the existence and basic properties of the Lebesgue measure and infinite product measures
• the fact that a Wiener process is almost surely everywhere continuous (mentioned below as a separate answer by weakstar)
• the fact that the roots of a complex polynomial (or the eigenvalues of a complex matrix) are continuous in the coefficients (though I should learn the proof for this, because the more precise statements on how well conditioned the roots are in terms of the coefficients are useful)
• the spectral theorem about linear maps on a possibly infinite-dimensional Hilbert space
• the proof that a convex function (from reals to reals) is always continuous everywhere and has a left and right derivative everywhere (Update: okay, remove this last one because Ian Morris gave a simple proof below.
I seemed to remember it was more difficult than that. Thanks, Ian.)
• Rademacher's theorem: every Lipschitz function from an open subset of $\mathbb{R}^m$ to $\mathbb{R}^n$ is differentiable almost everywhere. (Added on Paul Siegel's suggestion. For some reason I hadn't heard of this theorem before, but it sure sounds useful.)
• Lebesgue's criterion, which says that a bounded function from reals to reals is Riemann-integrable iff it's continuous almost everywhere. (The proof is elementary and doesn't require any ideas, but it's laborious.)

• This example is not really what I want because all basic algebraic topology books do it. – Benjamin Steinberg Jun 13 '12 at 22:59
• Most of those are genuinely laborious proofs, but the one about convex functions can be done in a few lines. A convex function clearly has at most two intervals of monotonicity, which implies that the left and right limits at each point exist. If they aren't the same for some point then we can find a chord between two points of the graph close to the discontinuity which passes below the graph (on the left if the jump is downwards, or to the right if it is upwards), contradicting convexity. Differentiability is obtained by showing that $(f(x+r)-f(x))/r$ is monotone in $r$. – Ian Morris Jun 14 '12 at 13:12
• Perhaps you could replace your fifth example with Rademacher's theorem: every Lipschitz function from an open subset of $\mathbb{R}^n$ to $\mathbb{R}^m$ is differentiable almost everywhere. This is a more serious result which people use all the time, and I'm not sure everyone really knows the proof (though maybe I should speak for myself). – Paul Siegel Jun 14 '12 at 23:39

Saharon Shelah has a series of results he actually calls "black boxes," and uses accordingly (see his paper, "Black Boxes," http://arxiv.org/abs/0812.0656); my understanding is that these are Diamond-like theorems that are provable in ZFC.
(Diamond, for clarification, is a sort of guessing principle: it asserts that there exists a single sequence $(A_\alpha)_{\alpha\in\omega_1}$ with $A_\alpha\subseteq\alpha$ such that, for any $A\subseteq \omega_1$, the set $$\lbrace \alpha: A_\alpha=A\cap\alpha\rbrace$$ is "large" (specifically, stationary - it intersects every closed unbounded subset of $\omega_1$). This principle is not provable in ZFC; it follows from $V=L$ and implies $CH$, but both of these implications are strict. My understanding, which is quite limited, is that Diamond is used in constructions of $\omega_1$-sized structures where one needs to "guess correctly" stationarily often, and that Shelah developed the black boxes to perform many of these same constructions in ZFC alone.)

Recognizing hamiltonian graphs is NP-complete. (A hamiltonian graph is a graph that has a cycle passing through every node.) Everyone likes to use this theorem in other NP-completeness proofs, but few people would know an actual proof. Even the simplest proof is somewhat messy. The theorem that 3-colorability of graphs is NP-complete is similar.

• Probably the unsolvability of Hilbert's 10th goes here as well. – Benjamin Steinberg Jun 14 '12 at 0:30

How is the proof of the Poincaré Conjecture (in all dimensions) not yet anywhere on this list? Edit: in light of the comments below, this answer is now being upgraded to the proof of the Geometrization Conjecture (which implies the Poincaré Conjecture, among other things).

• I think the proof of the Geometrization conjecture is a better answer since it is more applicable. – Benjamin Steinberg Jun 13 '12 at 23:24

Faltings' Theorem, to the effect that a curve of genus greater than 1 over the rationals has only finitely many rational points, is often invoked, I suspect often by people who haven't gone through a proof in detail.

I think the solution to Hilbert's 5th problem is an example.
For a while Gromov's polynomial growth theorem was an example, because the proof invoked Hilbert's 5th.

• This is a good example (I learned Gromov's proof as a student, but not Montgomery-Zippin/Gleason), but I'm not sure how many applications it's had. Recently, though, Green and Tao have had to generalize Montgomery-Zippin for applications, but they've had to delve into the details of the proof, so maybe the situation will be rectified. – Ian Agol Jun 14 '12 at 17:27

Most mathematicians can recite the construction of a Vitali set and state that the axiom of choice is needed. Very few of them would know how to describe the proof that the axiom of choice is really needed, i.e. Solovay's model (or even the Feferman-Levy model, in which every set is Borel).

Deligne's Theorem, found at Wikipedia under the heading of Weil conjectures, which is the Riemann Hypothesis for zeta-functions of algebraic varieties over finite fields, is often applied to estimate exponential sums in Number Theory, I suspect often by people (like me) who haven't gone through a proof in detail.

• You can add to that all the étale cohomology machinery. – Felipe Voloch Jun 14 '12 at 0:13

Existence and uniqueness of invariant Haar measure on a locally compact topological group. It is used in harmonic analysis and number theory. It is not so difficult a result to state, but a proof is not so commonly seen in books. The measure allows one to define an integral on the group and do analysis.

• This is pretty easy to avoid in practice. E.g., Haar measure on a manifold is easily constructed using invariant differential forms. Similarly, differential forms lift measure from $\mathbb Q_p$ to $p$-adic groups. Adeles are a little trickier (e.g., naive choices on $G_m(A)$ yield the zero measure). – Ben Wieland Jun 14 '12 at 1:13
• I think that probably most people in harmonic analysis more or less know how it works (as compared to the classification of finite simple groups).
– Benjamin Steinberg Jun 14 '12 at 2:11
• Actually, the proof is quite widely available. – Felix Goldberg Jun 15 '12 at 1:28
• What is not terribly well known (or exposited in very many books) is the constructive proof of existence and uniqueness of Haar measure that does not use the axiom of choice. While I imagine the vast majority of people who make use of Haar measure either don't care about the axiom of choice or have nicer constructions, as Ben Wieland suggests, it is at the very least an interesting curiosity that the axiom of choice is not needed at all, since the usual proof one sees relies so crucially on it. – Evan Jenkins Jun 15 '12 at 17:44
• @EvanJenkins: do you have a reference for the non-AC proof? When studying Haar measure construction I found a lot of texts doing only the compact case, and one text with an AC proof of the locally compact case. – Emilio Pisanty Jun 28 '12 at 11:56

I've got to put in 2c for ergodic theory: the Multiplicative Ergodic Theorem is widely quoted, but locating a complete proof is hard.

The graph minor theorem and the graph structure theorem are two results which are invoked quite often in combinatorics/graph theory. Much like the classification of finite simple groups, they are excellent ways of sweeping hundreds of pages of technical proofs under just a few sentences.

• Great example... – Benjamin Steinberg Jun 14 '12 at 0:30

The existence of Brownian Motion.

• @Zsbán: Continuity of BM is part of the standard definition, so proving that is the same as proving that it exists. However, the proof that BM is almost-surely nowhere differentiable is probably less well known.
– George Lowther Jun 14 '12 at 22:03
• The proof of continuity usually follows from the "Kolmogorov Criterion": if there exist strictly positive constants $\varepsilon$, $p$ and $C$ such that $$\mathbb{E}|X_t - X_s|^p \leq C|t-s|^{1+\varepsilon},$$ then almost surely $X$ has a modification which has $\alpha$-Hölder continuous paths for any $\alpha \in (0,\frac{\varepsilon}{p})$. – Felipe Olmos Jun 15 '12 at 23:54

Low dimensional topology is unfortunately full of such theorems. Maybe the archetypal example is the Kirby Theorem, which states that surgery on two framed links in $S^3$ gives diffeomorphic 3-manifolds if and only if the links are related by a specific set of combinatorial moves. The result is used routinely, in order to prove that invariants of framed links descend to topological invariants of the manifold (e.g. Reshetikhin-Turaev invariants). All known proofs of Kirby's Theorem are a nightmare (see this MO question). You need to use some heavy tool (Cerf's Theorem / an explicit presentation of Mapping Class Groups) in order to show that some expansion of the space of Morse functions (a Fréchet space) is path connected. This is outside the toolbox of most topologists. I would be surprised if there were 20 people in the world who have read through and understood the details of the proof of Kirby's Theorem. Yet it's routinely used. There are more mild examples too. The proof that PL 3-manifolds can be smoothed, and that the resulting smooth structure is unique up to isotopy (the exact statement is in Kirby-Siebenmann), is used routinely as though it were obvious, but it is actually quite a hard theorem which is not covered in any of the standard textbooks (Thurston's "3-Manifolds" being an exception). See Lurie's 2009 notes.

• Freedman's theorem "Casson handles are handles" is also used as a black box by many people. Once this is known, standard arguments from higher dimensions can be pushed down to 4 dimensions to prove h-cobordism and Poincaré.
Hopefully this will be rectified next summer when an extended workshop will go over the proof (I think at Bonn). – Ian Agol Jun 14 '12 at 17:20
• I confess that I have not looked at the proof of the Kirby calculus theorem recently either. But I personally think that the difficulty of the Thom transversality theorem and Cerf theory is overplayed. Is the Reidemeister move theorem for smooth knots a difficult theorem? It's the same sort of thing. Yes, there are a lot of details if you want to be very rigorous, but the lemmas all have natural statements. For instance, you can prove Thom transversality in the setting of a finite-dimensional vector space of polynomial functions, using algebraic geometry. – Greg Kuperberg Jun 19 '12 at 6:11

Nagata embedding is another black box - its statement is very simple and useful, but its proof is hard. By combining Nagata embedding with Hironaka's resolution of singularities (mentioned in another answer), you get "any smooth variety over a characteristic zero field admits an open immersion into a proper smooth variety", which is concise enough that people often use it without citing the authors' hard work.

I think the Uniformization theorem is an example of a blackbox theorem: any simply connected Riemann surface is conformally equivalent to either the open unit disk, the complex plane or the Riemann sphere.

• The standard proof of the Uniformization theorem with the Green's function, while rather involved, shouldn't really surpass the ability of most who come across it. There also exists a short and elegant proof that uses certain rather more advanced tools: the Mayer-Vietoris sequence and the celebrated Newlander-Nirenberg theorem. But NN for surfaces is just the existence of isothermal coordinates, which is much simpler to prove. This proof can be found in Demailly's "Complex Analytic and Differential Geometry" (available at www-fourier.ujf-grenoble.fr/~demailly/manuscripts/agbook.pdf).
– HeWhoHungers Jun 16 '12 at 1:24
• @HeWhoHungers: I agree that the proof in Demailly's book is marvellous and elegant, but it is neither easy nor short in any sense. I talked about exactly this proof in a lecture course on Teichmüller theory some years ago, addressing an audience of very bright graduate students. I needed 3 or 4 hours to communicate the proof and I remember it to be a tour de force, both for me and the audience. Even if you take the advanced tools for granted (as I did), the details (many of which are swept under the carpet in the book) are very, very subtle. – Johannes Ebert Jun 26 '12 at 17:40
• The ratio #{people who quote the theorem on a daily basis} / #{people who know the details of the proof offhand} is very high, so it is a perfect example of a blackbox theorem. – Johannes Ebert Jun 26 '12 at 17:47

The existence of Néron models. This gets used all the time when one talks of abelian varieties, but familiarity with the proof is almost never needed.

Embedding theorems for abelian categories (Freyd, Mitchell, Lubkin, ...) seem to qualify.

• Plus the Gabriel-Quillen-Laumon theorem, embedding an exact category into an abelian one. – Matthias Künzer Jun 14 '12 at 15:58

The Borel isomorphism theorem says that any two Polish (complete and separable metrizable) spaces endowed with their Borel $\sigma$-algebras are isomorphic as measurable spaces if and only if they have the same cardinality and this cardinality is either countable or the cardinality of the continuum. The result is extremely useful and widely applied in probability theory. It allows one to prove many results for general Polish spaces by proving them for the real line or the unit interval. The proof is actually not that hard, but somewhat messy and gives little usable insight for those not working in descriptive set theory.
When learning algebraic geometry, and in particular the notion of smooth varieties, you will probably stumble upon the following theorems:
• Regular local rings are factorial.
• Localizations of regular local rings are regular, too.
• A local ring is regular iff its residue field has finite projective dimension (Serre).
Many texts on algebraic geometry take these as a black box, quoting standard sources of commutative algebra. The reason seems to be that you don't have to understand the methods of the proof (e.g. Koszul homology) in order to apply these results.

• Also, the Serre conjecture, a.k.a. the Quillen-Suslin theorem. – darij grinberg Jun 14 '12 at 10:18

Doesn't Zorn's Lemma count? Of course in ZF this is not a theorem (rather, it is undecidable), but in ZF+AC it is a real theorem which is often mentioned without proof, especially in classes outside of mathematical logic. For example, in commutative algebra it is quoted in order to get enough maximal ideals in rings, etc. Of course it is not hard to understand the proof of AC => Zorn, but many students take this on faith. I don't know if this also applies to mathematicians.

• Okay, but in ZF+Zorn's lemma it's a tautology! I don't think in practice most people use specifically the fact that AC implies Zorn's lemma, but just the fact that it's generally considered okay to prove results that depend on Zorn's lemma. – Qiaochu Yuan Jun 14 '12 at 10:30

One of the seminal results in random matrix theory is that in the edge scaling limit, the distribution of the largest eigenvalue of random Hermitian matrices is the Tracy-Widom distribution. The original proof by Tracy and Widom is full of so many unintuitive technical details that most people who cite it don't understand it. (Or so I've been told.)

The Open Mapping Theorem (also known as the Banach-Schauder Theorem) is used daily by zillions of analysts. But its proof is far from trivial and is often overlooked by users. It is not just a straightforward consequence of Baire's Theorem.
• I think you are overestimating the difficulty of its proof. While it is not trivial, it is something standard and covered without any problem in a functional analysis course. If the proof is overlooked, that's probably usually because of, well, oversight, not because it is too complicated to be understood by the ordinary analyst. – Michal Kotowski Jun 14 '12 at 16:49
http://wavelets.org/schemes-eaw.php
Abstract

The edge-avoiding wavelets are a family of second-generation wavelets constructed using a data-prediction lifting scheme. The support of these wavelets is formed based on the edge content of the image and avoids having pixels from both sides of an edge. The scheme achieves nonlinear multi-scale edge-preserving image processing at computation times which are linear in the number of image pixels. The wavelets encode, in their shape, the smoothness information of the image at every scale.

Lifting Scheme

The lifting scheme is an efficient implementation of the fast wavelet transform. It provides a methodology for constructing biorthogonal wavelets in the spatial domain. This makes it a well-suited framework for constructing wavelets that adapt to the spatial particularities of the data. For example, one can start with some given simple basis and perform a sequence of modifications that adapt and improve the wavelets.

The scheme can be divided into three steps: split, predict, and update. The input signal data at the finest level, $a^0[n]$ (the superscripts denote the level), is divided into two disjoint sets $C$ and $F$. These sets define coarse and fine data points, respectively. The signal values restricted to these sets are denoted by $a^0_C[n]$ and $a^0_F[n]$. Next, the coarse data points $a^0_C$ are used to predict the fine $a^0_F$. This prediction operator is denoted by $\mathcal{P} : C \to F$. The prediction errors are the wavelet or detail coefficients of the next level of the wavelet transform. The coarse variables are usually not taken directly as the next-level approximation coefficients: the lifting scheme makes sure that the overall average of the approximation coefficients is preserved at all levels. This is achieved by an additional update operator $\mathcal{U} : F \to C$ that introduces averaging with the fine variables. These new variables are the approximation coefficients of the next level of the wavelet transform.
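As an illustrative sketch (not code from the paper), one level of the split/predict/update recipe can be written out directly. We use the midpoint predictor and quarter-detail update of the classical CDF 5/3 construction; repeating the edge neighbour at the signal boundaries is our own assumption.

```python
# Illustrative sketch: one level of the split/predict/update lifting recipe
# with the CDF 5/3 operators.  Edge-replication boundary handling is assumed.

def lifting_forward(a):
    coarse, fine = list(a[1::2]), list(a[0::2])   # split: odd / even indices
    n, m = len(fine), len(coarse)
    # predict: each fine sample a[2k] from coarse neighbours a[2k-1], a[2k+1]
    d = [fine[k] - (coarse[max(k - 1, 0)] + coarse[min(k, m - 1)]) / 2.0
         for k in range(n)]
    # update: lift the coarse samples with the neighbouring detail coefficients
    s = [coarse[k] + (d[k] + d[min(k + 1, n - 1)]) / 4.0 for k in range(m)]
    return s, d

def lifting_inverse(s, d):
    # undo the steps in reverse order: un-update, then un-predict, then merge
    n, m = len(d), len(s)
    coarse = [s[k] - (d[k] + d[min(k + 1, n - 1)]) / 4.0 for k in range(m)]
    fine = [d[k] + (coarse[max(k - 1, 0)] + coarse[min(k, m - 1)]) / 2.0
            for k in range(n)]
    a = [0.0] * (n + m)
    a[0::2], a[1::2] = fine, coarse
    return a
```

A constant signal yields zero detail coefficients and an unchanged approximation, and applying the steps in reverse order reconstructs the input.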
The following levels are computed recursively by repeating these three steps over the approximation coefficients. The inverse transformation is obtained by applying these steps in the reverse order.

To make this construction more concrete, a particular example is described. Consider a 1-D image signal $a^0[n]$ and the splitting step that takes the odd-indexed pixels as the coarse variables and the even-indexed as the fine. Every even-indexed pixel is predicted by its two odd-indexed neighbors using a simple linear interpolation formula $$\mathcal{P}\left(a^0_C\right)[n] = \left( a^0_C[n-1] + a^0_C[n+1] \right) / 2 \text{.}$$ Next, by choosing the following update operator $$\mathcal{U}\left(d^1\right)[n] = \left( d^1[n-1] + d^1[n+1] \right) / 4 \text{,}$$ the approximation average is preserved throughout the different levels. This construction corresponds to the well-known CDF 5/3 biorthogonal wavelets.

Weighted Wavelets

The scaling and wavelet functions are constructed based on the content of the input data. Instead of using data-independent regression formulae, a posteriori influence functions are used, based on the similarity between the predicted pixel and its neighboring coarse variables. More specifically, an edge-stopping function is used to define the following prediction weights $$w^j_n[m] = \left( \left| a^j[n] - a^j[m] \right|^\alpha + \epsilon \right)^{-1} \text{,}$$ where $\alpha$ is between 0.8 and 1.2 and $\epsilon = 10^{-5}$, for images with pixel values ranging from zero to one. Here $m$ indexes a coarse neighbor of the predicted pixel $n$, so a neighbor that differs strongly from $a^j[n]$ (i.e., one lying across an edge) receives a small weight. A two-dimensional weighted prediction based on the CDF 5/3 wavelet transform, applied along each axis separately, is described next.
Instead of using an even average of the two coarse variables, the following robust average is defined: $$\mathcal{P}\left(a^j_C\right)[x,y] = \frac{ w^j_{x,y}[x-1,y] a^j_C[x-1,y] + w^j_{x,y}[x+1,y] a^j_C[x+1,y] }{ w^j_{x,y}[x-1,y] + w^j_{x,y}[x+1,y] } \text{.}$$ The update operator $\mathcal{U}$ is designed to smooth the next-level approximation variables $a^{j+1}_C$ when possible, and a robust smoothing is used to define $$\mathcal{U}\left(d^{j+1}\right)[x,y] = \frac{ w^j_{x,y}[x-1,y] d^{j+1}[x-1,y] + w^j_{x,y}[x+1,y] d^{j+1}[x+1,y] }{ 2\left( w^j_{x,y}[x-1,y] + w^j_{x,y}[x+1,y] \right) } \text{.}$$ The analogous steps are repeated along the y image axis. Note that uniform weights, obtained with $\alpha = 0$, produce a separable two-dimensional CDF 5/3 wavelet transform.
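A 1-D sketch of the weighted predict step may help (our own simplification of the 2-D formulas above; the 2-D transform applies this along each image axis, and the boundary handling and test signal below are our own assumptions):

```python
# Illustrative 1-D sketch of the edge-avoiding predict step.  Boundary
# handling by repeating the edge neighbour is our own assumption.

def edge_avoiding_details(a, alpha=1.0, eps=1e-5):
    coarse, fine = list(a[1::2]), list(a[0::2])
    m = len(coarse)
    d = []
    for k in range(len(fine)):
        left = coarse[max(k - 1, 0)]
        right = coarse[min(k, m - 1)]
        # w = (|difference|^alpha + eps)^-1: a coarse neighbour lying across
        # an edge differs strongly from the predicted pixel, so its weight
        # is tiny and it barely influences the prediction
        wl = 1.0 / (abs(fine[k] - left) ** alpha + eps)
        wr = 1.0 / (abs(fine[k] - right) ** alpha + eps)
        d.append(fine[k] - (wl * left + wr * right) / (wl + wr))
    return d

# A step edge: uniform weights (alpha = 0, i.e. the plain CDF 5/3 predictor)
# leave a detail of 0.5 at the edge; the edge-avoiding weights leave almost
# nothing there.
step = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
```

With the default $\alpha = 1$, the largest detail coefficient on `step` is on the order of $10^{-5}$, while $\alpha = 0$ reproduces the even average and leaves a detail of 0.5 at the edge.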
http://mathhelpforum.com/advanced-applied-math/26071-linear-programming-markov-chain.html
# Thread: Linear Programming/ Markov Chain

1. ## Linear Programming/ Markov Chain

We want to minimize $\bold{c} \bold{x}$ subject to $\bold{A} \bold{x} = \bold{b}, \ \bold{x} \geq \bold{0}$. $\bold{A}$ is an $m \times n$ matrix of fixed constants, $\bold{c} = (c_1, \ldots, c_n)$, $\bold{b} = (b_1, \ldots, b_m)$ (both are vectors of fixed constants) and $\bold{x} = (x_{1}, \ldots, x_n)$ is the $n$-vector of nonnegative values that is to be chosen to minimize $\bold{c} \bold{x} \equiv \sum_{i=1}^{n} c_{i} x_{i}$. So if $n > m$, then the optimal $\bold{x}$ can always be chosen to have $n-m$ components equal to $0$.

So in a Markov chain, suppose that the algorithm is at the $j$th best extreme point; then after the next pivot the resulting extreme point is equally likely to be any of the $j-1$ best. We want to find the expected number of transitions needed to go from state $i$ to state $1$, or $E[T_i]$. Then why does $E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$? This is only for the initial transition, right? They probably conditioned on some variable, but I am not seeing it. Ultimately $E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$.

Source: An Introduction to Probability Models by Sheldon Ross

2. I guess what I am saying is: why did they even do that in the first place?

3. Originally Posted by shilz222
So in a Markov chain, suppose that the algorithm is at the $j$th best extreme point, then after the next pivot the resulting extreme point is equally likely to be any of the $j-1$ best. We want to find the expected number of transitions needed to go from state $i$ to state $1$, or $E[T_i]$. Then why does $E[T_i] = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$? This is only for the initial transition right? They probably conditioned on some variable, but I am not seeing it.

If $i>1$ then after one transition the chain is in one of the states $1, \ldots, i-1$, each with probability $1/(i-1)$.
So: $E(T_i)= 1 + \frac{1}{i-1} E(T_{1})+ \frac{1}{i-1} E(T_{2})+\dots+\frac{1}{i-1} E(T_{i-1}) = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} E[T_j]$

RonL
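The recursion and its harmonic-number solution $E[T_i] = \sum_{j=1}^{i-1} \frac{1}{j}$ are easy to check numerically — a quick sketch of my own (not from the thread), simulating the pivot chain directly:

```python
import random

def harmonic(k):
    # H_k = 1 + 1/2 + ... + 1/k; the claimed closed form is E[T_i] = H_{i-1}
    return sum(1.0 / j for j in range(1, k + 1))

def simulate_T(i, trials=200_000, seed=0):
    # Monte-Carlo estimate of E[T_i]: starting from state i, each pivot
    # lands uniformly on one of the currently better states 1, ..., state-1
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state = i
        while state > 1:
            state = rng.randint(1, state - 1)
            total += 1
    return total / trials

# check the recursion E[T_i] = 1 + (1/(i-1)) * sum_{j<i} E[T_j]
# against the closed form H_{i-1}, and against simulation
i = 6
lhs = harmonic(i - 1)
rhs = 1 + sum(harmonic(j - 1) for j in range(1, i)) / (i - 1)
print(lhs, rhs, simulate_T(i))
```

The recursion, the closed form, and the simulated average all agree to within Monte-Carlo noise.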
http://www.maplesoft.com/support/help/Maple/view.aspx?path=numtheory/lambda
Carmichael's lambda function - Maple Help

numtheory[lambda] - Carmichael's lambda function

Calling Sequence
lambda(n)

Parameters
n - integer

Description
• The size of the largest cyclic group generated by $g^i \bmod n$ is given by lambda(n).
• Carmichael's theorem states that $a^{\lambda(n)} \equiv 1 \pmod{n}$ if $\gcd(a,n)=1$.
• The command with(numtheory,lambda) allows the use of the abbreviated form of this command.

Examples
> with(numtheory):
> lambda(13)
12   (1)
> lambda(200)
20   (2)
> lambda(-105)
12   (3)
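For readers without Maple, the definition is straightforward to sketch directly (my own illustration, not Maple's implementation): $\lambda(n)$ is the least common multiple of $\lambda$ over the prime-power factors of $|n|$, where $\lambda(p^k)=p^{k-1}(p-1)$ for odd primes $p$ and $\lambda(2^k)=2^{k-2}$ for $k\geq 3$.

```python
from math import gcd
from functools import reduce

def factorize(n):
    # trial-division prime factorization: returns {prime: exponent}
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def carmichael_lambda(n):
    # lambda(p^k) = p^(k-1)(p-1) for odd p and for 2, 4;
    # lambda(2^k) = 2^(k-2) for k >= 3; lcm over all prime-power factors
    vals = []
    for p, k in factorize(abs(n)).items():
        if p == 2 and k >= 3:
            vals.append(2 ** (k - 2))
        else:
            vals.append(p ** (k - 1) * (p - 1))
    return reduce(lambda a, b: a * b // gcd(a, b), vals, 1)

print(carmichael_lambda(13), carmichael_lambda(200), carmichael_lambda(-105))
# -> 12 20 12, matching the Maple examples above
```

Carmichael's theorem can then be spot-checked with Python's three-argument `pow`: `pow(a, carmichael_lambda(n), n) == 1` whenever `gcd(a, n) == 1`.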
https://brandoncoya.wordpress.com/2015/03/16/is-math-the-same-on-other-planets/
## Is Math the same on other planets?

I went to an amazing wedding this pi day for my cousin Jonathan and his now wife Kd where the following question was posed on the dance floor by Kd's friend Jade, "Would math be the same on Jupiter?" I don't know if it was the occasion, the alcohol, or the dancing, but I was inspired to finally write again after getting some moderate research done over the quarter. The real gist of the question is whether or not math is something subjective which would be completely different on some alien world. Her answer to the original question was a big "no," that is, that math would be different on Jupiter and hence is subjective like all other things. I agreed with Jade that everything is subjective, but said math is the only true exception! This is what makes math the coolest. I'll try to explain why math is the same everywhere as best as I can.

I first need to mention science because a lot of people mistakenly believe that math and science are heavily intertwined. In the past this was true because scientists often came up with math accidentally while trying to explain things, but in modern times math is so specialized that only mathematicians come up with the abstract nonsense. Sometimes scientists will use modern math, but typically old math is sufficient, so they rarely come up with new ideas. Their relationship is better described as parasitic now: science the parasite uses math everywhere, but math can exist on its own without science.

Scientists observe things, make guesses, and try to show that their guesses are correct in the best way possible. The guesses and theories they make can be heavily influenced by location (but some physical truths are universal). For example, in X amount of years which I can't be bothered to Google, there will be no stars in our sky because they will all be too far away to observe.
A scientist of that time would have no ability to observe celestial bodies and might never figure out what we've figured out because of that. Yes, the truth is the same either way, but a scientist cannot figure out that truth without something to study.

Math is a different beast. Math is not done through experiments, first of all; we have no fancy labs to work in, we just sit there and think for a while. Then we jot something down, cross it out because it was the dumbest thing a human has ever written down, and keep thinking. Math is all about thinking and proving your thoughts with logical rules. Proofs are just collections of statements which start at some assumed statement and end at a conclusion. In between those two things you are only allowed to say objectively true facts to get to the end of the proof. You cannot use things that you believe are true, only things that are known to be true. For example, I proved in another post that there are an infinite number of prime numbers. See here for details. To prove the statement you have to assume several things, like that we have a number system, and that there is a definition of a prime number, along with some more subtle assumptions that go way too deep to mention. I used these facts to eventually conclude that the statement was true.

This leads me to the next big point: assumptions. It was long ago realized in math that you have to start somewhere. The late 1800s saw a surge in Foundational Mathematics. People wanted to redo all of math starting from the bare minimum. The problem is that you cannot define everything because that would require an infinite number of words. For example, if you go to a dictionary and look up a word, then look up the words in that definition, then keep doing the same with each of those words and so on, eventually you are going to end up going in circles. Not every word can be defined by other words. Math is similar.
If I tell you that a "Set" in math is defined to be a collection of elements, you might ask, well what does a collection mean? What does an element mean? And I'll tell you that a collection is just a grouping and an element is a thing. Then you ask what is a grouping or what is a thing? And it never ends, so you simply accept the intuitive idea of what it is supposed to mean! How definitions interact with each other is what really matters! In this example, the fact that a Set consists of Elements is what matters.

What we do in math is start from some beginning definitions like this, assumptions, or axioms and see what pops out when natural questions are raised. Here is a link to David Hilbert's axioms of Geometry. Axiom is just another word for foundational assumption. It is the most basic thing you can possibly assume because it doesn't follow from a different assumption. All the things about geometric shapes that you know and love can be proven from the starting points in the above link, and if I take these assumptions to Jupiter, all my proofs will still be true. Universal truths simply pop out of assumptions and our job is to find them. If the universe ends, all the facts about shapes will still be true starting from these axioms regardless of whether or not someone is alive to say so. The shapes don't even have to exist for the proofs to be true. There is no such thing as infinity in our universe but we still have tons of theorems about infinity that are true regardless.

Now the cool part is that you and I can start from different axioms and get different but also true statements. With the above axioms and a lot of free time you can prove the Pythagorean Theorem as it is commonly known ($a^2+b^2 = c^2$) for the sides of a right triangle.
If you change one axiom, the Parallel axiom, so that lines which would be parallel in the plane are allowed to cross (like great circles on a sphere), you get a completely different version of geometry with a different Pythagorean Theorem, and they don't contradict each other because they started from different assumptions. This kind of stuff also allows new math to be created from minimal things. We don't need a universe or observations to make math. We just assume some arbitrary stuff and see what happens. The assumptions don't have to go along with physical facts, they can be whatever so long as they don't contradict each other and are foundational. Obviously assuming some stuff and not others makes for more interesting questions and answers.

You might exclaim here that I was incorrect! An alien could come up with different theorems because they used some other axioms, I even admitted it! Yes that is absolutely true, however it doesn't make their math different from our math. Our math is still true on their planet and theirs on ours. If an alien starts with the same assumptions, they cannot contradict what we've figured out. They will get the exact same results. Getting different results from different axioms is totally fine and doesn't make our maths any different in the sense that people usually think about. One plus one is not suddenly going to equal three anywhere in the universe.

"It does not matter if we call the things chairs, tables and beer mugs or points, lines and planes." – David Hilbert when referring to Geometry.
https://regularize.wordpress.com/tag/compressed-sensing/
Let ${\Omega}$ be a compact subset of ${{\mathbb R}^d}$ and consider the space ${C(\Omega)}$ of continuous functions ${f:\Omega\rightarrow {\mathbb R}}$ with the usual supremum norm. The Riesz Representation Theorem states that the dual space of ${C(\Omega)}$ is in this case the set of all Radon measures, denoted by ${\mathfrak{M}(\Omega)}$, and the canonical duality pairing is given by $\displaystyle \langle\mu,f\rangle = \mu(f) = \int_\Omega fd\mu.$ We can equip ${\mathfrak{M}(\Omega)}$ with the usual notion of weak* convergence which reads as $\displaystyle \mu_n\rightharpoonup^* \mu\ \iff\ \text{for every}\ f\in C(\Omega):\ \mu_n(f)\rightarrow\mu(f).$

We call a measure ${\mu}$ positive if ${f\geq 0}$ implies that ${\mu(f)\geq 0}$. If a positive measure satisfies ${\mu(1)=1}$ (i.e. it integrates the constant function with unit value to one), we call it a probability measure and we denote with ${\Delta\subset \mathfrak{M}(\Omega)}$ the set of all probability measures.

Example 1 Every non-negative integrable function ${\phi:\Omega\rightarrow{\mathbb R}}$ with ${\int_\Omega \phi(x)dx = 1}$ induces a probability measure via $\displaystyle f\mapsto \int_\Omega f(x)\phi(x)dx.$ Quite different probability measures are the ${\delta}$-measures: For every ${x\in\Omega}$ there is the ${\delta}$-measure at this point, defined by $\displaystyle \delta_x(f) = f(x).$

In some sense, the set ${\Delta}$ of probability measures is the generalization of the standard simplex in ${{\mathbb R}^n}$ to infinite dimensions (in fact uncountably many dimensions): The ${\delta}$-measures are the extreme points of ${\Delta}$ and since the set ${\Delta}$ is compact in the weak* topology, the Krein-Milman Theorem states that ${\Delta}$ is the weak*-closure of the set of convex combinations of the ${\delta}$-measures – similarly as the standard simplex in ${{\mathbb R}^n}$ is the set of convex combinations of the canonical basis vectors of ${{\mathbb R}^n}$.
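As a pedestrian illustration of Example 1 and of the role of the ${\delta}$-measures (my own sketch, with a made-up density ${\phi(x)=\tfrac{3}{2}x^2}$ on ${I=[-1,1]}$): a quadrature rule approximates the induced probability measure by a convex combination of point masses ${\delta_{t_i}}$.

```python
# phi(x) = 3/2 * x^2 integrates to 1 on [-1, 1], so f -> int f*phi dx is a
# probability measure; trapezoidal quadrature replaces it by the discrete
# measure mu_h = sum_i w_i * delta_{t_i} with w_i >= 0 and sum_i w_i ~ 1
N = 2000
h = 2.0 / N
t = [-1.0 + i * h for i in range(N + 1)]
w = [h * 1.5 * x * x for x in t]
w[0] *= 0.5   # trapezoidal rule halves the endpoint weights
w[-1] *= 0.5

total = sum(w)                                    # mu_h(1), should be ~ 1
moment = sum(wi * x * x for wi, x in zip(w, t))   # mu_h(x^2); exact value 3/5
print(total, moment)
```

The discrete measure integrates the constant function to (nearly) one, so it is itself (nearly) a point in ${\Delta}$, in line with the Krein-Milman picture above.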
Remark 1 If we drop the positivity assumption and form the set $\displaystyle O = \{\mu\in\mathfrak{M}(\Omega)\ :\ |f|\leq 1\implies |\mu(f)|\leq 1\}$ we have that ${O}$ is the set of convex combinations of the measures ${\pm\delta_x}$ (${x\in\Omega}$). Hence, ${O}$ resembles the hyper-octahedron (aka cross polytope or ${\ell^1}$-ball).

I've taken the above (with almost the same notation) from the book "A Course in Convexity" by Alexander Barvinok. I was curious to find (in Chapter III, Section 9) something which reads as a nice glimpse on semi-continuous compressed sensing: Proposition 9.4 reads as follows

Proposition 1 Let ${g,f_1,\dots,f_m\in C(\Omega)}$, ${b\in{\mathbb R}^m}$ and suppose that the subset ${B}$ of ${\Delta}$ consisting of the probability measures ${\mu}$ such that for ${i=1,\dots,m}$ $\displaystyle \int f_id\mu = b_i$ is not empty. Then there exist ${\mu^+,\mu^-\in B}$ such that

1. ${\mu^+}$ and ${\mu^-}$ are convex combinations of at most ${m+1}$ ${\delta}$-measures, and
2. it holds that for all ${\mu\in B}$ we have $\displaystyle \mu^-(g)\leq \mu(g)\leq \mu^+(g).$

In terms of compressed sensing this says: Among all probability measures which comply with the data ${b}$ measured by ${m}$ linear measurements, there are two extremal ones which consist of at most ${m+1}$ ${\delta}$-measures. Note that something similar to "support-pursuit" does not work here: The minimization problem ${\min_{\mu\in B, \mu(f_i)=b_i}\|\mu\|_{\mathfrak{M}}}$ does not make much sense, since ${\|\mu\|_{\mathfrak{M}}=1}$ for all ${\mu\in B}$.

Today I report on two things I came across here at ISMP:

• The first is a talk by Russell Luke on Constraint qualifications for nonconvex feasibility problems. Luke treated the NP-hard problem of sparsest solutions of linear systems. In fact he did not tackle this problem but the problem of finding an ${s}$-sparse solution of an ${m\times n}$ system of equations.
He formulated this as a feasibility problem (well, Heinz Bauschke was a collaborator) as follows: With the usual malpractice let us denote by ${\|x\|_0}$ the number of non-zero entries of ${x\in{\mathbb R}^n}$. Then the problem of finding an ${s}$-sparse solution to ${Ax=b}$ is: $\displaystyle \text{Find}\ x\ \text{in}\ \{\|x\|_0\leq s\}\cap\{Ax=b\}.$ In other words: find a feasible point, i.e. a point which lies in the intersection of the two sets. Well, most often feasibility problems involve convex sets, but here the first one, given by this "${0}$-norm", is definitely not convex.

One of the simplest algorithms for the convex feasibility problem is to alternatingly project onto both sets. This algorithm dates back to von Neumann and has been analyzed in great detail. To make this method work for non-convex sets one only needs to know how to project onto both sets. For the case of the equality constraint ${Ax=b}$ one can use numerical linear algebra to obtain the projection. The non-convex constraint on the number of non-zero entries is in fact even easier: For ${x\in{\mathbb R}^n}$ the projection onto ${\{\|x\|_0\leq s\}}$ consists of just keeping the ${s}$ largest entries of ${x}$ while setting the others to zero (known as the "best ${s}$-term approximation"). However, the theory breaks down in the case of non-convex sets. Russell treated the problem in several papers (have a look at his publication page) and in the talk he focused on the problem of constraint qualification, i.e. what kind of regularity has to be imposed on the intersection of the two sets. He could show that (local) linear convergence of the algorithm (which is observed numerically) can indeed be justified theoretically.
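A toy version of this alternating-projection scheme is easy to write down — a sketch under simplifying assumptions of my own (the rows of ${A}$ are chosen orthonormal so that the affine projection is explicit, and the tiny system and data are made up):

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matTvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def proj_affine(A, b, x):
    # projection onto {x : Ax = b}; valid because the rows of A are orthonormal
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    return [xi + ci for xi, ci in zip(x, matTvec(A, r))]

def proj_sparse(x, s):
    # best s-term approximation: keep the s largest entries in magnitude
    idx = set(sorted(range(len(x)), key=lambda j: -abs(x[j]))[:s])
    return [xi if j in idx else 0.0 for j, xi in enumerate(x)]

r2 = math.sqrt(2.0)
A = [[1 / r2, 0.0, 1 / r2], [0.0, 1.0, 0.0]]   # orthonormal rows
b = [r2, 0.0]                                   # consistent with 1-sparse x* = (2, 0, 0)
x = [0.0, 0.0, 0.0]
for _ in range(60):                             # alternate the two projections
    x = proj_sparse(proj_affine(A, b, x), s=1)
print(x)   # -> converges to a 1-sparse solution, here [2.0, 0.0, 0.0]
```

On this example the iteration converges linearly (the error halves per sweep), which matches the locally linear convergence Luke's analysis justifies; no global guarantee is claimed here.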
One point which is still open is the phenomenon that the method seems to be convergent regardless of the initialization and (even more surprisingly) that the limit point seems to be independent of the starting point (and also seems to be robust with respect to overestimating the sparsity ${s}$). I wondered if his results are robust with respect to inexact projections. For larger problems the projection onto the equality constraint ${Ax=b}$ is computationally expensive. For example it would be interesting to see what happens if one approximates the projection with a truncated CG-iteration as Andreas, Marc and I did in our paper on subgradient methods for Basis Pursuit.

• Joel Tropp reported on his paper Sharp recovery bounds for convex deconvolution, with applications together with Michael McCoy. However, in his title he used demixing instead of deconvolution (which, I think, is more appropriate and leads to less confusion). With "demixing" they mean the following: Suppose you have two signals ${x_0}$ and ${y_0}$ of which you observe only the superposition of ${x_0}$ and a unitarily transformed ${y_0}$, i.e. for a unitary matrix ${U}$ you observe $\displaystyle z_0 = x_0 + Uy_0.$ Of course, without further assumptions there is no way to recover ${x_0}$ and ${y_0}$ from the knowledge of ${z_0}$ and ${U}$. As one motivation he used the assumption that both ${x_0}$ and ${y_0}$ are sparse. After the big bang of compressed sensing it is no surprise that one turns to convex optimization with ${\ell^1}$-norms in the following manner: $\displaystyle \min_{x,y} \|x\|_1 + \lambda\|y\|_1 \ \text{such that}\ x + Uy = z_0. \ \ \ \ \ (1)$ This looks a lot like sparse approximation: Eliminating ${x}$ one obtains the unconstrained problem $\displaystyle \min_y \|z_0-Uy\|_1 + \lambda \|y\|_1.$ Phrased differently, this problem aims at finding an approximate sparse solution of ${Uy=z_0}$ such that the residual (could also say "noise") ${z_0-Uy=x}$ is also sparse.
This differs from the common Basis Pursuit Denoising (BPDN) by the structure function for the residual (which is the squared ${2}$-norm). This is due to the fact that in BPDN one usually assumes Gaussian noise, which naturally leads to the squared ${2}$-norm. Well, one man's noise is the other man's signal, as we see here. Tropp and McCoy obtained very sharp thresholds on the sparsity of ${x_0}$ and ${y_0}$ which allow for exact recovery of both of them by solving (1). One thing which makes their analysis simpler is the following reformulation: They treated the related problem $\displaystyle \min_{x,y} \|x\|_1\ \text{such that}\ \|y\|_1\leq\alpha,\ x+Uy=z_0$ (which I would call the Ivanov version of the Tikhonov-problem (1)). This allows for precise exploitation of prior knowledge by assuming that the number ${\alpha_0 = \|y_0\|_1}$ is known. First I wondered if this reformulation was responsible for their unusually sharp results (sharper than the results for exact recovery by BPDN), but I think it's not. I think this is due to the fact that they have this strong assumption on the "residual", namely that it is sparse. This can be formulated with the help of the ${1}$-norm (which is "non-smooth") in contrast to the smooth ${2}$-norm, which is what one gets as prior for Gaussian noise. Moreover, McCoy and Tropp generalized their result to the case in which the structure of ${x_0}$ and ${y_0}$ is formulated by two functionals ${f}$ and ${g}$, respectively. Assuming a kind of non-smoothness of ${f}$ and ${g}$ they obtain the same kind of results, and especially matrix decomposition problems are covered.

In this post I just collect a few papers that caught my attention in the last month. I begin with Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes.
The abstract reads

Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements ${n}$ exceed the sparsity ${\|x\|_0}$ of the unknown signal ${x\in\mathbb{R}^p}$. However, if the sparsity ${\|x\|_0}$ is unknown, the choice of ${n}$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of ${x}$ with only a small number of linear measurements. Although we show that estimation of ${\|x\|_0}$ is generally intractable in this framework, we consider an alternative measure of sparsity ${s(x):=\frac{\|x\|_1^2}{\|x\|_2^2}}$, which is a sharp lower bound on ${\|x\|_0}$, and is more amenable to estimation. When ${x}$ is a non-negative vector, we propose a computationally efficient estimator ${\hat{s}(x)}$, and use non-asymptotic methods to bound the relative error of ${\hat{s}(x)}$ in terms of a finite number of measurements. Remarkably, the quality of estimation is dimension-free, which ensures that ${\hat{s}(x)}$ is well-suited to the high-dimensional regime where ${n < p}$. These results also extend naturally to the problem of using linear measurements to estimate the rank of a positive semi-definite matrix, or the sparsity of a non-negative matrix. Finally, we show that if no structural assumption (such as non-negativity) is made on the signal ${x}$, then the quantity ${s(x)}$ cannot generally be estimated when ${n < p}$.

It's a nice combination of the observation that the quotient ${s(x)}$ is a sharp lower bound for ${\|x\|_0}$ and that it is possible to estimate the one-norm and the two-norm of a vector ${x}$ (with additional properties) from carefully chosen measurements. For a non-negative vector ${x}$ you just measure with the constant-one vector which (in a noisy environment) gives you an estimate of ${\|x\|_1}$. Similarly, measuring with a Gaussian random vector you can obtain an estimate of ${\|x\|_2}$.
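The quantity ${s(x)}$ is a one-liner, and the lower-bound property (a consequence of the Cauchy-Schwarz inequality applied to the nonzero entries) is easy to observe — a sketch of my own with a made-up sparse vector:

```python
import random

def numerical_sparsity(x):
    # s(x) = ||x||_1^2 / ||x||_2^2; Cauchy-Schwarz gives s(x) <= ||x||_0,
    # with equality exactly when all nonzero entries have equal magnitude
    l1 = sum(abs(xi) for xi in x)
    l2sq = sum(xi * xi for xi in x)
    return l1 * l1 / l2sq

rng = random.Random(42)
x = [0.0] * 100
for i in rng.sample(range(100), 7):     # a 7-sparse nonnegative vector in R^100
    x[i] = rng.uniform(0.5, 2.0)

nnz = sum(1 for xi in x if xi != 0)
print(numerical_sparsity(x), nnz)       # s(x) is at most nnz = ||x||_0
```

Unlike ${\|x\|_0}$, the quotient is stable under small perturbations of the entries, which is what makes it amenable to estimation from a few linear measurements.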
Then there is the dissertation of Dustin Mixon on the arxiv: Sparse Signal Processing with Frame Theory, which is well worth reading but too long to provide a short overview. Here is the abstract:

Many emerging applications involve sparse signals, and their processing is a subject of active research. We desire a large class of sensing matrices which allow the user to discern important properties of the measured sparse signal. Of particular interest are matrices with the restricted isometry property (RIP). RIP matrices are known to enable efficient and stable reconstruction of sufficiently sparse signals, but the deterministic construction of such matrices has proven very difficult. In this thesis, we discuss this matrix design problem in the context of a growing field of study known as frame theory. In the first two chapters, we build large families of equiangular tight frames and full spark frames, and we discuss their relationship to RIP matrices as well as their utility in other aspects of sparse signal processing. In Chapter 3, we pave the road to deterministic RIP matrices, evaluating various techniques to demonstrate RIP, and making interesting connections with graph theory and number theory. We conclude in Chapter 4 with a coherence-based alternative to RIP, which provides near-optimal probabilistic guarantees for various aspects of sparse signal processing while at the same time admitting a whole host of deterministic constructions.

By the way, the thesis is dedicated "To all those who never dedicated a dissertation to themselves."

Further we have Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form by Jason D Lee, Yuekai Sun, Michael A. Saunders. This paper extends the well explored first order methods for problems of the type ${\min g(x) + h(x)}$ with Lipschitz-differentiable ${g}$ or simple ${\mathrm{prox}_h}$ to second order Newton-type methods.
The abstract reads

We consider minimizing convex objective functions in composite form $\displaystyle \min_{x\in\mathbb{R}^n} f(x) := g(x) + h(x)$ where ${g}$ is convex and twice-continuously differentiable and ${h:\mathbb{R}^n\rightarrow\mathbb{R}}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. Many problems of relevance in high-dimensional statistics, machine learning, and signal processing can be formulated in composite form. We prove such methods are globally convergent to a minimizer and achieve quadratic rates of convergence in the vicinity of a unique minimizer. We also demonstrate the performance of such methods using problems of relevance in machine learning and high-dimensional statistics.

With this post I say goodbye for a few weeks of holiday.

How many samples are needed to reconstruct a sparse signal? Well, there are many, many results around some of which you probably know (at least if you are following this blog or this one). Today I write about a neat result which I found quite some time ago on reconstruction of nonnegative sparse signals from a semi-continuous perspective.

1. From discrete sparse reconstruction/compressed sensing to semi-continuous

The basic sparse reconstruction problem asks the following: Say we have a vector ${x\in{\mathbb R}^m}$ which only has ${s<m}$ non-zero entries and a fat matrix ${A\in{\mathbb R}^{n\times m}}$ (i.e. ${n<m}$) and consider that we are given measurements ${b=Ax}$. Of course, the system ${Ax=b}$ is underdetermined. However, we may add a little more prior knowledge on the solution and ask: Is it possible to reconstruct ${x}$ from ${b}$ if we know that the vector ${x}$ is sparse? If yes: How? Under what conditions on ${m}$, ${s}$, ${n}$ and ${A}$?
This question created the expanding universe of compressed sensing recently (and this universe is expanding so fast that for sure there has to be some dark energy in it). As a matter of fact, a powerful method to obtain sparse solutions to underdetermined systems is ${\ell^1}$-minimization a.k.a. Basis Pursuit, on which I blogged recently: Solve $\displaystyle \min_x \|x\|_1\ \text{s.t.}\ Ax=b$ and the important ingredient here is the ${\ell^1}$-norm of the vector in the objective function.

In this post I'll formulate semi-continuous sparse reconstruction. We move from an ${m}$-vector ${x}$ to a finite signed measure ${\mu}$ on a closed interval (which we assume to be ${I=[-1,1]}$ for simplicity). We may embed the ${m}$-vectors into the space of finite signed measures by choosing ${m}$ points ${t_i}$, ${i=1,\dots, m}$ from the interval ${I}$ and build ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ with the point-masses (or Dirac measures) ${\delta_{t_i}}$.

To be a bit more precise, we speak about the space ${\mathfrak{M}}$ of Radon measures on ${I}$, which are defined on the Borel ${\sigma}$-algebra of ${I}$ and are finite. Radon measures are not very scary objects and an intuitive way to think of them is to use Riesz representation: Every Radon measure arises as a continuous linear functional on a space of continuous functions, namely the space ${C_0(I)}$ which is the closure of the continuous functions with compact support in ${{]{-1,1}[}}$ with respect to the supremum norm. Hence, Radon measures work on these functions as ${\int_I fd\mu}$. It is also natural to speak of the support ${\text{supp}(\mu)}$ of a Radon measure ${\mu}$ and it holds for any continuous function ${f}$ that $\displaystyle \int_I f d\mu = \int_{\text{supp}(\mu)}f d\mu.$

An important tool for Radon measures is the Hahn-Jordan decomposition which decomposes ${\mu}$ into a positive part ${\mu^+}$ and a negative part ${\mu^-}$, i.e. ${\mu^+}$ and ${\mu^-}$ are non-negative and ${\mu = \mu^+-\mu^-}$.
Finally, the variation of a measure, which is $\displaystyle \|\mu\| = \mu^+(I) + \mu^-(I),$ provides a norm on the space of Radon measures.

Example 1 For the measure ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ one readily calculates that $\displaystyle \mu^+ = \sum_i \max(0,x_i)\delta_{t_i},\quad \mu^- = \sum_i \max(0,-x_i)\delta_{t_i}$ and hence $\displaystyle \|\mu\| = \sum_i |x_i| = \|x\|_1.$ In this sense, the space of Radon measures provides a generalization of ${\ell^1}$.

We may sample a Radon measure ${\mu}$ with ${n+1}$ linear functionals and these can be encoded by ${n+1}$ continuous functions ${u_0,\dots,u_n}$ as $\displaystyle b_k = \int_I u_k d\mu.$ This sampling gives a bounded linear operator ${K:\mathfrak{M}\rightarrow {\mathbb R}^{n+1}}$. The generalization of Basis Pursuit is then given by $\displaystyle \min_{\mu\in\mathfrak{M}} \|\mu\|\ \text{s.t.}\ K\mu = b.$ This was introduced and called "Support Pursuit" in the preprint Exact Reconstruction using Support Pursuit by Yohann de Castro and Fabrice Gamboa. More on the motivation and the use of Radon measures for sparsity can be found in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.

2. Exact reconstruction of sparse nonnegative Radon measures

Before I talk about the results we may count the degrees of freedom a sparse Radon measure has: If ${\mu = \sum_{i=1}^s x_i \delta_{t_i}}$ with some ${s}$, then ${\mu}$ is defined by the ${s}$ weights ${x_i}$ and the ${s}$ positions ${t_i}$. Hence, we expect that at least ${2s}$ linear measurements should be necessary to reconstruct ${\mu}$. Surprisingly, this is almost enough if we know that the measure is nonnegative! We only need one more measurement, that is ${2s+1}$, and moreover, we can take fairly simple measurements, namely the monomials: ${u_i(t) = t^i}$, ${i=0,\dots, n}$ (with the convention that ${u_0(t)\equiv 1}$). This is shown in the following theorem by de Castro and Gamboa.
Theorem 1 Let ${\mu = \sum_{i=1}^s x_i\delta_{t_i}}$ with ${x_i\geq 0}$, ${n=2s}$ and let ${u_i}$, ${i=0,\dots n}$ be the monomials as above. Define ${b_i = \int_I u_i(t)d\mu}$. Then ${\mu}$ is the unique solution of the support pursuit problem, that is of $\displaystyle \min \|\nu\|\ \text{s.t.}\ K\nu = b.\qquad \textup{(SP)}$

Proof: The following polynomial will be of importance: For a constant ${c>0}$ define $\displaystyle P(t) = 1 - c \prod_{i=1}^s (t-t_i)^2.$ The following properties of ${P}$ will be used:

1. ${P(t_i) = 1}$ for ${i=1,\dots,s}$
2. ${P}$ has degree ${n=2s}$ and hence is a linear combination of the ${u_i}$, ${i=0,\dots,n}$, i.e. ${P = \sum_{k=0}^n a_k u_k}$.
3. For ${c}$ small enough it holds for ${t\neq t_i}$ that ${|P(t)|<1}$.

Now let ${\sigma}$ be a solution of (SP). We have to show that ${\|\mu\|\leq \|\sigma\|}$. Since ${\sigma}$ is feasible we know that $\displaystyle \int_I u_k d\sigma = (K\sigma)_k = b_k = \int_I u_k d\mu.$ Due to properties 1 and 2 and the non-negativity of ${\mu}$ we conclude that $\displaystyle \begin{array}{rcl} \|\mu\| & = & \sum_{i=1}^s x_i = \int_I P d\mu\\ & = & \int_I \sum_{k=0}^n a_k u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\sigma\\ & = & \int_I P d\sigma. \end{array}$ Moreover, by Lebesgue's decomposition we can decompose ${\sigma}$ with respect to ${\mu}$ such that $\displaystyle \sigma = \underbrace{\sum_{i=1}^s y_i\delta_{t_i}}_{=\sigma_1} + \sigma_2$ and ${\sigma_2}$ is singular with respect to ${\mu}$. We get $\displaystyle \begin{array}{rcl} \int_I P d\sigma = \sum_{i=1}^s y_i + \int P d\sigma_2 \leq \|\sigma_1\| + \|\sigma_2\|=\|\sigma\| \end{array}$ Since ${\mu}$ is feasible and ${\sigma}$ is a minimizer we also have ${\|\sigma\|\leq\|\mu\|}$, and we conclude that ${\|\sigma\| = \|\mu\|}$ and especially ${\int_I P d\sigma_2 = \|\sigma_2\|}$. This shows that ${\mu}$ is a solution of (SP). It remains to show uniqueness.
We show the following: If there is a ${\nu\in\mathfrak{M}}$ with support in ${I\setminus\{t_1,\dots,t_s\}}$ such that ${\int_I Pd\nu = \|\nu\|}$, then ${\nu=0}$. To see this, we build, for any ${r>0}$, the sets $\displaystyle \Omega_r = [-1,1]\setminus \bigcup_{i=1}^s ]t_i-r,t_i+r[$ and assume that there exists ${r>0}$ such that ${\|\nu|_{\Omega_r}\|\neq 0}$ (${\nu|_{\Omega_r}}$ denoting the restriction of ${\nu}$ to ${\Omega_r}$). However, it holds by property 3 of ${P}$ that $\displaystyle \int_{\Omega_r} P d\nu < \|\nu|_{\Omega_r}\|$ and consequently $\displaystyle \begin{array}{rcl} \|\nu\| &=& \int Pd\nu = \int_{\Omega_r} Pd\nu + \int_{\Omega_r^C} P d\nu\\ &<& \|\nu|_{\Omega_r}\| + \|\nu|_{\Omega_r^C}\| = \|\nu\| \end{array}$ which is a contradiction. Hence, ${\nu|_{\Omega_r}=0}$ for all ${r}$, and this implies ${\nu=0}$.

Since ${\sigma_2}$ satisfies ${\int_I P d\sigma_2 = \|\sigma_2\|}$ and has its support in ${I\setminus\{t_1,\dots,t_s\}}$, we conclude that ${\sigma_2=0}$. Hence, the support of ${\sigma}$ is contained in ${\{t_1,\dots,t_s\}}$. Since ${K\sigma = b = K\mu}$, we have ${K(\sigma-\mu) = 0}$. This can be written as a Vandermonde system $\displaystyle \begin{pmatrix} u_0(t_1)& \dots &u_0(t_s)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_s) \end{pmatrix} \begin{pmatrix} y_1 - x_1\\ \vdots\\ y_s - x_s \end{pmatrix} = 0$ which, since the ${t_i}$ are distinct, only has the zero solution, giving ${y_i=x_i}$. $\Box$

3. Generalization to other measurements

The measurement by monomials may sound a bit unusual. However, de Castro and Gamboa show more. What really matters here is that the monomials form a so-called Chebyshev system (or Tchebyscheff system or T-system – by the way, have you ever tried to google for a T-system?). This is explained, for example, in the book “Tchebycheff Systems: With Applications in Analysis and Statistics” by Karlin and Studden. A T-system on ${I}$ is simply a set of ${n+1}$ functions ${\{u_0,\dots, u_n\}}$ such that any nontrivial linear combination of these functions has at most ${n}$ zeros.
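For the monomials, the T-system property can be seen via a Vandermonde matrix: a nontrivial polynomial of degree at most ${n}$ cannot vanish at ${n+1}$ distinct points, since the square Vandermonde matrix at distinct nodes is invertible. A quick numerical check (the nodes below are arbitrary):

```python
import numpy as np

nodes = np.array([-0.9, -0.3, 0.1, 0.4, 0.8])   # n + 1 = 5 distinct points
n = nodes.size - 1

# V[j, k] = nodes[j] ** k; a nontrivial combination sum_k a_k t^k vanishing
# at all nodes would mean V a = 0 with a != 0 -- impossible since det(V) != 0
V = np.vander(nodes, n + 1, increasing=True)
print(abs(np.linalg.det(V)) > 0)                 # True: only the zero combination
```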
These systems are named after Tchebyscheff since they obey many of the helpful properties of the Tchebyscheff polynomials. What is helpful in our context is the following theorem of Krein:

Theorem 2 (Krein) If ${\{u_0,\dots,u_n\}}$ is a T-system for ${I}$, ${k\leq n/2}$ and ${t_1,\dots,t_k}$ are in the interior of ${I}$, then there exists a linear combination ${\sum_{k=0}^n a_k u_k}$ which is non-negative and vanishes exactly at the points ${t_i}$.

Now consider that we replace the monomials in Theorem~1 by a T-system. You recognize that Krein’s Theorem allows one to construct a “generalized polynomial” which fulfills the same requirements as the polynomial ${P}$ in the proof of Theorem~1, as soon as the constant function 1 lies in the span of the T-system, and indeed the result of Theorem~1 is also valid in that case.

4. Exact reconstruction of ${s}$-sparse nonnegative vectors from ${2s+1}$ measurements

From the above one can deduce a reconstruction result for ${s}$-sparse vectors, and I quote Theorem 2.4 from Exact Reconstruction using Support Pursuit:

Theorem 3 Let ${n}$, ${m}$, ${s}$ be integers such that ${s\leq \min(n/2,m)}$ and let ${\{1,u_1,\dots,u_n\}}$ be a complete T-system on ${I}$ (that is, ${\{1,u_1,\dots,u_r\}}$ is a T-system on ${I}$ for all ${r\leq n}$). Then it holds: For any distinct reals ${t_1,\dots,t_m}$ and ${A}$ defined as $\displaystyle A=\begin{pmatrix} 1 & \dots & 1\\ u_1(t_1)& \dots &u_1(t_m)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_m) \end{pmatrix}$ Basis Pursuit recovers all nonnegative ${s}$-sparse vectors ${x\in{\mathbb R}^m}$.

5. Concluding remarks

Note that Theorem~3 gives a deterministic construction of a measurement matrix. Also note that nonnegativity is crucial in what we did here. This allowed us (in the monomial case) to work with squares and obtain the polynomial ${P}$ in the proof of Theorem~1 (which is also called a “dual certificate” in this context). This raises the question of how this method can be adapted to all sparse signals.
One needs (in the monomial case) a polynomial which is bounded by 1 in absolute value but matches the signs of the measure on its support. While this can be done (I think) for polynomials, it seems difficult to obtain a generalization of Krein’s Theorem to this case…
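As a closing numerical illustration of Theorem~3 in the monomial case (sizes, spike positions, and weights below are made-up), Basis Pursuit can be posed as a linear program by splitting ${x = x^+ - x^-}$; a sketch with scipy:

```python
import numpy as np
from scipy.optimize import linprog

m, s = 50, 3
t = np.linspace(-1.0, 1.0, m)                 # distinct reals t_1, ..., t_m
x_true = np.zeros(m)
x_true[[10, 25, 40]] = [0.8, 1.2, 0.5]        # nonnegative s-sparse vector

n = 2 * s                                     # only 2s + 1 = 7 measurements
A = np.vander(t, n + 1, increasing=True).T    # rows 1, t, t^2, ..., t^n
b = A @ x_true

# Basis Pursuit: min ||x||_1 s.t. Ax = b, as an LP in (xp, xm) with x = xp - xm
res = linprog(np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:m] - res.x[m:]
print(np.allclose(x_hat, x_true, atol=1e-5))
```

Here the ${2s+1=7}$ rows of ${A}$ are the monomials evaluated at the grid points; by Theorem~3 the nonnegative 3-sparse vector is the unique ${\ell^1}$ minimizer.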
Related Calculators

Equations formed with variables, exponents, and coefficients are called polynomial equations; an example of a polynomial equation is 4x^5 + 2x + 7. A polynomial can have different exponents, and the highest one is called the degree of the equation. A value of x that makes the equation equal to 0 is termed a zero (or root); the zeros of a polynomial equation are the solutions of the function f(x) = 0.

In algebra, a quadratic equation (from the Latin quadratus for "square") is any equation that can be rearranged in the standard form ax² + bx + c = 0, where x represents an unknown and a, b, and c represent known numbers with a ≠ 0. If a = 0, then the equation is linear, not quadratic, as there is no ax² term. Solving quadratics by factorizing usually works just fine.

A cubic equation has the form ax³ + bx² + cx + d = 0 (a third-degree polynomial equation); it must have the term in x³, but any or all of b, c, and d can be zero. In other words, a cubic function is a function of the form f(x) = ax³ + bx² + cx + d, and the solutions of the cubic equation are termed its roots or zeros. A fourth-degree equation is one whose largest exponent is 4. For polynomials of degree less than 5, the exact values of the roots can be returned.

A polynomial with rational coefficients can sometimes be written as a product of lower-degree polynomials that also have rational coefficients; in such cases, the polynomial is said to "factor over the rationals." Factoring is a useful way to find rational roots. The following methods are used: factoring monomials (common factor), factoring quadratics, grouping and regrouping, square of sum/difference, cube of sum/difference, difference of squares, sum/difference of cubes, and the rational zeros theorem.

Typical polynomial calculators include: a roots calculator, which finds the roots (zeros) of a given polynomial; a factoring calculator, which writes a polynomial as a product of linear factors; a division calculator, which takes a simple or complex expression and finds the quotient and remainder; a generator, which builds a polynomial from a desired set of roots (e.g., input roots 1/2 and 4); a multiplication calculator, e.g., multiplying (6x − 5)(2x + 3) by the FOIL method; and a discriminant calculator, which computes the discriminant of a higher-degree polynomial from the resultant of the polynomial and its derivative. A polynomial regression application fits data with a model that uses one predictor and a certain number of its powers. Note: when using double-precision variables, polynomials of degree 7 and above begin to fail because of limited floating-point resolution.

This web site owner is mathematician Miloš Petrović, who designed the site and wrote all the lessons, formulas, and calculators (contact: mathhelp@mathportal.org).
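As a concrete instance of numerical root finding, numpy computes polynomial roots via the eigenvalues of the companion matrix (the cubic below is an arbitrary example):

```python
import numpy as np

# roots of the cubic x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1, -6, 11, -6]           # a, b, c, d in ax^3 + bx^2 + cx + d
roots = np.sort(np.roots(coeffs))  # numpy builds the companion matrix internally
print(roots)                       # approximately [1. 2. 3.]
```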
# Maths

### What is the integral of sec(x)?

The integral of sec(x) is given by: ∫ sec(x) dx = ln |sec(x) + tan(x)| + C, where C is a constant....

### The integral of sec^2(x) Explained

The integral of sec^2(x) is given by: ∫ sec^2(x) dx = tan(x) + C, where C is a constant. This result can be derived...

### Central Difference Formula | Example, First & Second Derivative

The central difference formula is a method for approximating the derivative of a function at a particular...

### Taylor Series of cos x

The Taylor series of the cosine function is given by: cos(x) = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! + ... This series...
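The two numerical formulas above can be tried out directly; a short Python sketch (the test point x = 0.8 and the step h are arbitrary choices):

```python
import math

def central_diff(f, x, h=1e-5):
    # first-derivative central difference: (f(x + h) - f(x - h)) / (2h), O(h^2) error
    return (f(x + h) - f(x - h)) / (2.0 * h)

def cos_taylor(x, terms=10):
    # partial sum of cos(x) = 1 - x^2/2! + x^4/4! - ...
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(terms))

x = 0.8
print(abs(central_diff(math.cos, x) - (-math.sin(x))) < 1e-9)   # True
print(abs(cos_taylor(x) - math.cos(x)) < 1e-12)                 # True
```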
A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation. Vol. 8, No. 4, 2021

2021, 8(4): 701-717. doi: 10.1109/JAS.2021.1003919
Abstract: With the increasing amount of information on the internet, the recommendation system (RS) has been utilized in a variety of fields as an efficient tool to overcome information overload. In recent years, the application of RS for health has become a growing research topic due to its tremendous advantages in providing appropriate recommendations and helping people make the right decisions relating to their health. This paper aims at presenting a comprehensive review of typical recommendation techniques and their applications in the field of healthcare. More concretely, an overview is provided of three famous recommendation techniques, namely, content-based, collaborative filtering (CF)-based, and hybrid methods. Next, we provide a snapshot of five application scenarios for health RS, which are dietary recommendation, lifestyle recommendation, training recommendation, decision-making for patients and physicians, and disease-related prediction. Finally, some key challenges are given with clear justifications in this new and booming field.

2021, 8(4): 718-752. doi: 10.1109/JAS.2021.1003925
Abstract: This paper presents a comprehensive review of emerging technologies for the internet of things (IoT)-based smart agriculture. We begin by summarizing the existing surveys and describing emergent technologies for the agricultural IoT, such as unmanned aerial vehicles, wireless technologies, open-source IoT platforms, software defined networking (SDN), network function virtualization (NFV) technologies, cloud/fog computing, and middleware platforms.
We also provide a classification of IoT applications for smart agriculture into seven categories, including: smart monitoring, smart water management, agrochemical applications, disease management, smart harvesting, supply chain management, and smart agricultural practices. Moreover, we provide a taxonomy and a side-by-side comparison of state-of-the-art methods for supply chain management based on blockchain technology for agricultural IoTs. Furthermore, we present real projects that use most of the aforementioned technologies, which demonstrate their great performance in the field of smart agriculture. Finally, we highlight open research challenges and discuss possible future research directions for agricultural IoTs.

2021, 8(4): 753-765. doi: 10.1109/JAS.2021.1003913
Abstract: Construction crane vessels make use of dynamic positioning (DP) systems during the installation and removal of offshore structures to maintain the vessel’s position. Studies have reported cases of instability of DP systems during offshore operation caused by uncertainties, such as mooring forces. DP “robustification” for heavy lift operations, i.e., handling such uncertainties systematically and with stability guarantees, is a long-standing challenge in DP design. A new DP method, composed of an observer and a controller, is proposed to address this challenge, with stability guarantees in the presence of uncertainties. We test the proposed method in an integrated crane-vessel simulation environment, where the integration of several subsystems (winch dynamics, crane forces, thruster dynamics, fuel injection system, etc.) allows a realistic validation under a wide set of uncertainties.

2021, 8(4): 766-778. doi: 10.1109/JAS.2021.1003922
Abstract: In this paper, an adaptive dynamic programming (ADP) strategy is investigated for discrete-time nonlinear systems with unknown nonlinear dynamics subject to input saturation.
To save the communication resources between the controller and the actuators, stochastic communication protocols (SCPs) are adopted to schedule the control signal, and therefore the closed-loop system is essentially a protocol-induced switching system. A neural network (NN)-based identifier with a robust term is exploited to approximate the unknown nonlinear system, and a set of switch-based updating rules with an additional tunable parameter for the NN weights is developed with the help of gradient descent. By virtue of a novel Lyapunov function, a sufficient condition is proposed to achieve the stability of both the system identification errors and the update dynamics of the NN weights. Then, an offline value-iteration ADP algorithm is proposed to solve the optimal control of protocol-induced switching systems with saturation constraints, and its convergence is discussed by mathematical induction. Furthermore, an actor-critic NN scheme is developed to approximate the control law and the proposed performance index function in the framework of ADP, and the stability of the closed-loop system is analyzed in view of Lyapunov theory. Finally, numerical simulation results are presented to demonstrate the effectiveness of the proposed control scheme.

2021, 8(4): 779-795. doi: 10.1109/JAS.2020.1003405

Abstract: The purpose of this paper is to assess the operational efficiency of public bus transportation via a case study of a company in a large city of China, using a data envelopment analysis (DEA) model and Shannon’s entropy. This company operates 37 main routes on the backbone roads and thus plays a significant role in public transportation in the city. According to bus industry norms, an efficiency evaluation index system is constructed from the perspective of both company operations and passenger demands.
For passenger satisfaction, passenger waiting time and passenger-crowding degree are considered; they are undesirable indicators. To describe such indicators, a super-efficient DEA model is constructed. With this model, efficiency is evaluated for each bus route using actual data. Results show that the DEA model combined with Shannon’s entropy achieves more reasonable results. A sensitivity analysis is also presented. The results are therefore meaningful for the company to improve its operations and management.

2021, 8(4): 796-805. doi: 10.1109/JAS.2020.1003533

Abstract: High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones under the L2-norm. Yet the L2-norm is sensitive to outlier data, and outliers usually exist in such matrices. For example, an HiDS matrix from RSs commonly contains many outlier ratings due to heedless/malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt a smooth L1-norm rather than the L2-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to outlier data but also has significantly higher prediction accuracy than state-of-the-art models when predicting the missing data of HiDS matrices.

2021, 8(4): 806-816.
doi: 10.1109/JAS.2021.1003928

Abstract: This paper investigates the distributed fault-tolerant containment control (FTCC) problem of nonlinear multi-agent systems (MASs) under a directed network topology. The proposed control framework, which is independent of global information about the communication topology, consists of two layers. Different from most existing distributed fault-tolerant control (FTC) protocols, where a fault in one agent may propagate over the network, the developed control method can eliminate the phenomenon of fault propagation. Based on the hierarchical control strategy, the FTCC problem with a directed graph can be simplified to the distributed containment control of the upper layer and the fault-tolerant tracking control of the lower layer. Finally, simulation results are given to demonstrate the effectiveness of the proposed control protocol.

2021, 8(4): 817-836. doi: 10.1109/JAS.2021.1003916

Abstract: This paper studies the problem of fixed-time output consensus tracking for high-order multi-agent systems (MASs) with directed network topology, with consideration of data packet dropout. First, a predictive-compensation-based distributed observer is presented to compensate for packet dropout and estimate the leader’s states. Next, stability analysis is conducted to prove fixed-time convergence of the developed distributed observer. Then, adaptive fixed-time dynamic surface control is designed to counteract mismatched disturbances introduced by observation error and stabilize the tracking error system within a fixed time, which overcomes the explosion-of-complexity problem and the singularity problem. Finally, simulation results are provided to verify the effectiveness and superiority of the proposed consensus tracking strategy.
The contribution of this paper is a fixed-time distributed observer design method for high-order MASs under a directed graph subject to packet dropout, and a novel fixed-time control strategy that can handle mismatched disturbances and overcome the explosion-of-complexity and singularity problems.

2021, 8(4): 837-847. doi: 10.1109/JAS.2021.1003931

Abstract: This paper proposes an adaptive sliding mode observer (ASMO)-based approach for wind turbines subject to simultaneous faults in sensors and actuators. The proposed approach enables the simultaneous detection of actuator and sensor faults without the need for any redundant hardware components. Additionally, wind speed variations are considered as unknown disturbances, thus eliminating the need for accurate measurement or estimation. The proposed ASMO enables the accurate estimation and reconstruction of the descriptor states and disturbances. The proposed design implements the principle of separation to enable the use of the nominal controller during faulty conditions. Fault tolerance is achieved by implementing a signal correction scheme to recover the nominal behavior. The performance of the proposed approach is validated using a 4.8 MW wind turbine benchmark model subject to various faults. Monte Carlo analysis is also carried out to further evaluate the reliability and robustness of the proposed approach in the presence of measurement errors. Simplicity, ease of implementation, and the decoupling property are among the positive features of the proposed approach.

2021, 8(4): 848-865. doi: 10.1109/JAS.2021.1003934

Abstract: Multi-cloud systems have been on the rise. For safety-critical missions, it is important to guarantee their security and reliability.
To address trust constraints in a heterogeneous multi-cloud environment, this work proposes a novel scheduling method called matching and multi-round allocation (MMA) to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints. The method is divided into two phases. The first phase finds the best matching candidate resources for the tasks to meet their preferential demands, including performance, security, and reliability, in a multi-cloud environment; the second iteratively performs multiple rounds of re-allocation to optimize task execution time and cost by minimizing the variance of the estimated completion time. The proposed algorithm, the modified cuckoo search (MCS), hybrid chaotic particle search (HCPS), modified artificial bee colony (MABC), max-min, and min-min algorithms are implemented in CloudSim for simulation. The simulation and experimental results show that the proposed method achieves shorter makespan, lower cost, higher resource utilization, and a better trade-off between time and economic cost, and it is more stable and efficient.

2021, 8(4): 866-875. doi: 10.1109/JAS.2021.1003937

Abstract: In this study, an innovative solution is developed for vibration suppression of high-rise buildings. An infinite-dimensional system model is presented for describing high-rise building structures, which carry a large inertial load, with the help of Hamilton’s principle. On the basis of this system model, and using Lyapunov’s direct method, a boundary controller is proposed and the closed-loop system is shown to be uniformly bounded in the time domain. Finally, using the Smart Structure laboratory platform produced by Quanser, we conduct a set of experiments and find that the designed method is effective.

2021, 8(4): 876-889.
doi: 10.1109/JAS.2020.1003420

Abstract: In this paper, we elaborate on residual-driven Fuzzy C-Means (FCM) for image segmentation, which is the first approach that realizes accurate residual (noise/outlier) estimation and enables the noise-free image to participate in clustering. We propose a residual-driven FCM framework by integrating into FCM a residual-related regularization term derived from the distribution characteristics of different types of noise. Built on this framework, a weighted $\ell_{2}$-norm regularization term is presented by weighting the mixed noise distribution, thus resulting in a universal residual-driven FCM algorithm in the presence of mixed or unknown noise. Besides, with the constraint of spatial information, the residual estimation becomes more reliable than that considering only the observed image itself. Supporting experiments on synthetic, medical, and real-world images are conducted. The results demonstrate the superior effectiveness and efficiency of the proposed algorithm over its peers.

2021, 8(4): 890-904. doi: 10.1109/JAS.2020.1003198

Abstract: Configuration evaluation is a key technology to be considered in the design of multiple aircraft formation (MAF) configurations with highly dynamic properties in engineering applications. This paper deduces the relationship between the relative velocity, dynamic safety distance, and dynamic adjacent distance of formation members, then divides the formation states into collision-state and matching-state. Meanwhile, probability models are constructed based on the binary normal distribution of relative distance and relative velocity. Moreover, configuration evaluation strategies are studied by quantitatively analyzing the denseness and the basic capabilities according to the MAF collision-state probability and the MAF matching-state probability, respectively.
The scale of MAF is grouped into five levels, and previous lattice-type structures are extended into four degrees by taking the relative velocities into account, to guide the configuration design under complex task conditions. Finally, hardware-in-the-loop (HIL) simulation and outfield flight test results are presented to verify the feasibility of these evaluation strategies.

2021, 8(4): 905-915. doi: 10.1109/JAS.2020.1003003

Abstract: Embedded systems have numerous applications in everyday life. Petri-net-based representation for embedded systems (PRES+) is an important methodology for the modeling and analysis of these embedded systems. For a large complex embedded system, state space explosion makes it difficult for PRES+ to model and analyze the system. The Petri net synthesis method allows one to bypass the state space explosion issue. To solve this problem, as well as to model and analyze large complex systems, two synthesis methods for PRES+ are presented in this paper. First, the property preservation of the synthesis shared transition set method is investigated. The property preservation of the synthesis shared transition subnet set method is then studied. An abstraction-synthesis-refinement representation method is proposed. Through this representation method, the synthesis shared transition set approach is used to investigate the property preservation of the synthesis shared transition subnet set operation. Under certain conditions, several important properties of these synthetic nets are preserved, namely reachability, timing, functionality, and liveness. An embedded control system model is used as an example to illustrate the effectiveness of these synthesis methods for PRES+.

2021, 8(4): 916-928.
doi: 10.1109/JAS.2020.1003435

Abstract: This paper aims at eliminating asymmetric and saturated hysteresis nonlinearities by designing a hysteresis pseudo-inverse compensator and a robust adaptive dynamic surface control (DSC) scheme. “Pseudo-inverse” means that an online calculation mechanism for an approximate control signal is developed by applying a searching method to the designed temporary control signal in which the true control signal is included. The main contributions are summarized as follows: 1) to the best of our knowledge, this is the first time asymmetric and saturated hysteresis is compensated by a hysteresis pseudo-inverse compensator, because constructing the true saturated-type hysteresis inverse model is very difficult; 2) by designing the saturated-type hysteresis pseudo-inverse compensator, the construction of a true explicit hysteresis inverse and the identification of its corresponding unknown parameters are not required when dealing with saturated-type hysteresis; 3) by combining the DSC technique with the tracking-error-transformed function, the “explosion of complexity” problem in the backstepping method is overcome and the prespecified tracking performance is achieved. Stability analysis and experimental results on a hardware-in-the-loop platform illustrate the effectiveness of the proposed adaptive pseudo-inverse control scheme.
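The search-based idea behind such a pseudo-inverse compensator can be illustrated on a toy example. The sketch below is not the paper's algorithm: it assumes a made-up monotone saturated nonlinearity (`saturated_nonlinearity`, standing in for one branch of a saturated hysteresis map) and recovers an approximate control input by bisection, mirroring the idea of computing the control signal by numerical search instead of an explicit inverse model.

```python
import math

def saturated_nonlinearity(u, k=1.2, u_max=2.0):
    """Toy saturated input nonlinearity (a stand-in for one hysteresis
    branch): monotone in u, so it can be inverted numerically."""
    u = max(-u_max, min(u_max, u))
    return math.tanh(k * u)

def pseudo_inverse(v_desired, lo=-2.0, hi=2.0, tol=1e-6):
    """Bisection search for u such that saturated_nonlinearity(u) ~ v_desired.
    Clamps to the achievable range when v_desired lies beyond saturation."""
    v_lo, v_hi = saturated_nonlinearity(lo), saturated_nonlinearity(hi)
    if v_desired <= v_lo:
        return lo
    if v_desired >= v_hi:
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if saturated_nonlinearity(mid) < v_desired:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = pseudo_inverse(0.5)
print(u, saturated_nonlinearity(u))  # recovered input reproduces the target
```

The search needs only forward evaluations of the nonlinearity, which is why no explicit inverse model or parameter identification of the inverse is required.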
https://pos.sissa.it/408/063/
Volume 408 - XV International Workshop on Hadron Physics (XVHadronPhysics) - Section Posters

Multiplicity moments at the LHC: how bad is the negative binomial distribution?

G. Germano* and F. Silveira Navarra

Full text: pdf. Pre-published on: August 01, 2022

Abstract: In this work, we compare the first C-moments of the multiplicity distributions recently measured in proton-proton collisions at the LHC with the predictions of the Bialas-Praszalowicz model. In this model the multiplicity distribution is given by a negative binomial distribution (NBD). In our comparison, we try to identify the regions of phase space where the NBD fails. We divide the data into three sets according to their phase space coverage: I: $p_T > 100$ MeV and $|\eta| < 0.5$; II: $p_T > 100$ MeV and $|\eta| < 2.4$; and III: $p_T > 500$ MeV and $|\eta| < 2.4$. The mean multiplicity grows with the energy according to a power law, and the power is different for each set. The $C_n$ moments grow continuously with the energy, slowly in set I and faster in the other sets. We find that the NBD gives a very good description of the measured moments $C_2$, $C_3$ and $C_4$ and slightly overestimates $C_5$ in all data sets. The negative binomial parameter $k$ decreases continuously with the energy and there is no sign of change in this behavior.

DOI: https://doi.org/10.22323/1.408.0063
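The reduced moments discussed here, $C_q = \langle n^q \rangle / \langle n \rangle^q$, are easy to evaluate numerically for an NBD. The script below is an illustration with arbitrary parameter values (not the fitted values from this work); it parameterizes the NBD by its mean multiplicity and shape parameter $k$, and checks $C_2$ against the closed form $C_2 = 1 + 1/\bar{n} + 1/k$ that follows from the NBD variance $\bar{n} + \bar{n}^2/k$.

```python
import math

def nbd_pmf(n, mean, k):
    """Negative binomial P(n), parameterized by the mean multiplicity
    and the shape parameter k."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(mean / (mean + k))
             + k * math.log(k / (mean + k)))
    return math.exp(log_p)

def c_moment(q, mean, k, n_max=4000):
    """Reduced C-moment C_q = <n^q> / <n>^q, truncating the sum at n_max."""
    mq = sum((n ** q) * nbd_pmf(n, mean, k) for n in range(n_max))
    m1 = sum(n * nbd_pmf(n, mean, k) for n in range(n_max))
    return mq / m1 ** q

# Illustrative values only; compare C_2 with the analytic 1 + 1/mean + 1/k.
mean, k = 20.0, 1.5
print(c_moment(2, mean, k), 1 + 1 / mean + 1 / k)
```

The same routine gives $C_3$, $C_4$, $C_5$ by changing `q`, which is enough to explore how the moments respond as $k$ decreases with energy.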
https://stat.ethz.ch/R-manual/R-devel/library/lattice/html/simpleTheme.html
simpleTheme {lattice} — R Documentation

## Function to generate a simple theme

### Description

Simple interface to generate a list appropriate as a theme, typically used as the par.settings argument in a high-level call.

### Usage

    simpleTheme(col, alpha, cex, pch, lty, lwd, font, fill, border,
                col.points, col.line, alpha.points, alpha.line)

### Arguments

col, col.points, col.line: A color specification. col is used for components "plot.symbol", "plot.line", "plot.polygon", "superpose.symbol", "superpose.line", and "superpose.polygon". col.points overrides col, but is used only for "plot.symbol" and "superpose.symbol". Similarly, col.line overrides col for "plot.line" and "superpose.line". The arguments can be vectors, but only the first component is used for scalar targets (i.e., the ones without "superpose" in their name).

alpha, alpha.points, alpha.line: A numeric alpha transparency specification. The same rules as for col, etc., apply.

cex, pch, font: Parameters for points. Applicable for components plot.symbol (for which only the first component is used) and superpose.symbol (for which the arguments can be vectors).

lty, lwd: Parameters for lines. Applicable for components plot.line (for which only the first component is used) and superpose.line (for which the arguments can be vectors).

fill: Fill color, applicable for components plot.symbol, plot.polygon, superpose.symbol, and superpose.polygon.

border: Border color, applicable for components plot.polygon and superpose.polygon.

### Details

The appearance of a lattice display depends partly on the “theme” active when the display is plotted (see trellis.device for details). This theme is used to obtain defaults for various graphical parameters, and in particular, the auto.key argument works on the premise that the same source is used for both the actual graphical encoding and the legend.
The easiest way to specify custom settings for a particular display is to use the par.settings argument, which is usually tedious to construct as it is a nested list. The simpleTheme function can be used in such situations as a wrapper that generates a suitable list given parameters in simple name=value form, with the nesting made implicit. This is less flexible, but straightforward and sufficient in most situations.

### Value

A list that would work as the theme argument to trellis.device and trellis.par.set, or as the par.settings argument to any high-level lattice function such as xyplot.

### Author(s)

Deepayan Sarkar Deepayan.Sarkar@R-project.org, based on a suggestion from John Maindonald.

### See Also

trellis.device, xyplot, Lattice

### Examples

    str(simpleTheme(pch = 16))

    dotplot(variety ~ yield | site, data = barley, groups = year,
            auto.key = list(space = "right"),
            par.settings = simpleTheme(pch = 16),
            xlab = "Barley Yield (bushels/acre)",
            aspect = 0.5, layout = c(1, 6))

[Package lattice version 0.20-45 Index]
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2020235?viewType=html
# American Institute of Mathematical Sciences

## Dynamics at infinity and Jacobi stability of trajectories for the Yang-Chen system

1 Guangxi Colleges and Universities Key Laboratory of Complex System Optimization and Big Data Processing, Yulin Normal University, Yulin 537000, China
2 College of Science, Guangxi University for Nationalities, Guangxi 530006, China
3 School of Mathematics and Physics, China University of Geosciences (Wuhan), Wuhan, Hubei 430074, China
4 Zhejiang Institute, China University of Geosciences, Hangzhou, Zhejiang 311305, China

* Corresponding author: weizhouchao@163.com

Received January 2020; Revised May 2020; Published August 2020

Fund Project: The first author is supported by National Natural Science Foundation of China (Grant No. 11961074), Natural Science Foundation of Guangxi Province (Grant Nos. 2018GXNSFDA281028, 2017GXNSFAA198234), the High Level Innovation Team Program from Guangxi Higher Education Institutions of China (Document No. [2018] 35), and the Science Technology Program of Yulin Normal University (Grant No. 2017YJKY28). The second author is supported by the Postgraduate Innovation Program of Guangxi University for Nationalities (Grant No. GXUN-CHXZS2018042). The third author is supported by National Natural Science Foundation of China (Grant No. 11772306), Zhejiang Provincial Natural Science Foundation of China (Grant No. LY20A020001), and the Fundamental Research Funds for the Central Universities, China University of Geosciences (CUGGC05).

The present work is devoted to giving new insights into a chaotic system with two stable node-foci, named the Yang-Chen system. Firstly, based on a global view of the influence of equilibrium points on the complexity of the system, the dynamic behavior of the system at infinity is analyzed. Secondly, the Jacobi stability of the trajectories of the system is discussed from the viewpoint of Kosambi-Cartan-Chern theory (KCC theory).
The dynamical behavior of the deviation vector near the whole trajectories (including all equilibrium points) is analyzed in detail. The obtained results show that, in the sense of Jacobi stability, all equilibrium points of the system, including the two linearly stable node-foci, are Jacobi unstable. These studies show that one might witness chaotic behavior of the system trajectories before they enter a neighborhood of an equilibrium point or periodic orbit. There exists a sort of stability artifact that cannot be found without using the powerful method of Jacobi stability analysis.

Citation: Yongjian Liu, Qiujian Huang, Zhouchao Wei. Dynamics at infinity and Jacobi stability of trajectories for the Yang-Chen system. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020235

Figure captions:

- Attractors of the Yang-Chen system with (a) $a = 10$, $b = 8/3$, $c = 16$; (b) $a = 35$, $b = 3$, $c = 35$.
- Dynamics of the Yang-Chen system near the sphere at infinity in the local chart $U_1$ for (blue) $(a, b, c) = (0.5, 1.01, 1)$ with initial conditions $(z_1(0), z_2(0), z_3(0)) = (0.03, 0.03, -0.03)$; (red) $(a, b, c) = (1, 1.01, 1)$ with initial conditions $(0.03, 0.03, -0.03)$; (black) $(a, b, c) = (0.1, 1.01, 1)$ with initial conditions $(0.03, 0.03, -0.01)$.
- Dynamics of the Yang-Chen system near the sphere at infinity in the local chart $V_1$ for (blue) $(a, b, c) = (0.5, 1.01, 1)$ with initial conditions $(z_1(0), z_2(0), z_3(0)) = (0.03, 0.03, 0.03)$; (red) $(a, b, c) = (1, 1.01, 1)$ with initial conditions $(0.03, 0.03, 0.03)$; (black) $(a, b, c) = (0.1, 1.01, 1)$ with initial conditions $(0.03, 0.03, 0.01)$.
- Phase portrait of system (10), which corresponds to the phase portrait of the Yang-Chen system at infinity in the local chart $U_2$.
- Phase portrait of system (12), which corresponds to the phase portrait of the Yang-Chen system at infinity in the local chart $U_3$.
- Phase portrait of system (1) at infinity.
- Time variation of the deviation vector and its curvature near $E_{1}$, for $a = 35$, $b = 3$.
- Time variation of the instability exponent $\delta(E_{1})$ for $a = 35$, $b = 3$, and different values of $c$.
- Time variation of the deviation vector and its curvature near $E_{2,3}$ with $a = 35$, $b = 3$. Initial conditions used to integrate the deviation equations are $\xi_{1}(0) = \xi_{2}(0) = 0$, $\dot{\xi}_{1}(0) = \dot{\xi}_{2}(0) = 10^{-6}$.
- Time variation of the curvature $\kappa_{0}$ of the deviation vector near equilibrium point $E_{1}$ with $a = 35$, $b = 3$.
- Time variation of the curvature $\kappa_{0}$ of the deviation vector near equilibrium points $E_{2,3}$ with $a = 35$, $b = 3$.
- A larger version of Fig. 11 at time $0.25$ to $0.55$.
http://hagutierrezro.blogspot.com/2016/11/lord-paradox-in-r.html
## Sunday, November 20, 2016

In an article called "A Paradox in the Interpretation of Group Comparisons," published in Psychological Bulletin, Lord (1967) made famous the following controversial story:

A university is interested in investigating the effects of the nutritional diet its students consume in the campus restaurant. Various types of data were collected, including the weight of each student in the month of January and their weight in the month of June of the same year. The objective of the university is to know whether the diet has greater effects on men than on women. This information is analyzed by two statisticians.

The first statistician observes that at the end of the semester (June), the average weight of the men is identical to their average weight at the beginning of the semester (January). This situation also occurs for the women. The only difference is that the women started the year with a lower average weight (which is expected given their natural build). On average, neither men nor women gained or lost weight during the course of the semester. The first statistician concludes that there is no evidence of any significant effect of diet (or any other factor) on student weight. In particular, there is no evidence of any differential effect on the two sexes, since neither group shows systematic differences.

The second statistician examines the data more carefully. He notes that there is a group of men and women who started the semester with the same weight: thin men and overweight women. He observes that those men gained weight relative to the average and those women lost weight relative to the average. The second statistician concludes that, controlling for initial weight, the university diet has a positive differential effect on men relative to women. It is evident that, for men and women with the same initial weight, the groups differ on average, since the men gained more weight and the women lost more weight.
The following chart shows the reasoning of both statisticians in dealing with the problem. The black line is the 45-degree line; the green points are the data from the men and the red ones from the women. The reasoning of the first statistician focuses on the expectations of the two distributions, specifically on the coordinates (x = 60, y = 60) for the women and (x = 70, y = 70) for the men, where the black, red, and green lines appear to coincide. The reasoning of the second statistician is restricted to the region induced by the overlap of the red and green points, specifically the space induced by x = (60, 70), y = (60, 70).

Suppose we have access to this dataset, as shown in the following illustration, where the first column gives the initial weight of the students, the second column the final weight, the third column the difference between the weights, and the last one the sex of the student.

The findings of the first statistician are obtained through a simple regression analysis that, taking the difference between weights as the response variable, yields a regression coefficient of zero for the variable sex, which indicates that there are no significant differences in the weight change between men and women. The findings of the second statistician are obtained through an analysis of covariance, taking the final weight as the response variable with sex and the initial weight of the individual as covariates. This method yields a regression coefficient of 5.98, which implies that there is a significant difference between the final weights of the students according to sex.

For Imbens and Rubin (2015), both are right when it comes to describing the data, although both lack sound reasoning for establishing any kind of causality between the university diet and the loss or gain of weight among the students.
Regardless of this, I still find more interesting the analysis that arises from the comparison between men and women who started with the same weight (i.e., all data restricted to x = (60, 70), y = (60, 70)).

# R workshop

Lord's paradox summarizes the analysis of two statisticians who examine the average weight of some students at a particular university. At the end of the semester (June), the average weight of the men is identical to their average weight at the beginning of that six months (January). This situation also occurs for the women. The only difference is that the women started the year with a lower average weight (which is evident from their natural build). On average, neither men nor women gained or lost weight during the semester.

To perform the simulation, we assume that both the final weight of the men and that of the women follow a linear relationship with the original weight. Thus, $y_{2i}^M = \beta_0^M + \beta_1 y_{1i}^M + \varepsilon_i$ for the weight of the women, and $y_{2i}^H = \beta_0^H + \beta_1 y_{1i}^H + \varepsilon_i$ for the weight of the men. Here $y_{1i}^M$ denotes the weight of the $i$-th female at the beginning of the semester, and $y_{2i}^M$ denotes her weight at the end of the semester. The notation for the men ($H$, from the Spanish hombre, just as $M$ is from mujer) maintains this logic.

Now, note that owing to their natural build, men must have greater weight than women. Suppose that on average the weight of the men equals that of the women plus a constant $c$. In addition, the mean weight in both groups is identical at both times. Then we have $\bar{y}^M = \beta_0^M + \beta_1 \bar{y}^M$ and $\bar{y}^H = \beta_0^H + \beta_1 \bar{y}^H = \beta_0^H + \beta_1 (\bar{y}^M + c).$ Hence, after some algebra, we have that $\beta_0^M = (1 - \beta_1) \bar{y}^M$ and $\beta_0^H = \bar{y}^H - \beta_1 (\bar{y}^M + c)$. The following code replicates a set of data that follows the relationship proposed by Lord.
N <- 1000        # students per group
b <- 10          # the constant c: average male-female weight gap
l <- 50
u <- 70
Mujer1 <- runif(N, l, u)     # initial weights, women
Hombre1 <- Mujer1 + b        # initial weights, men
beta1 <- 0.4
Mujerb0 <- (1 - beta1) * mean(Mujer1)
Hombreb0 <- mean(Hombre1) - beta1 * (mean(Mujer1) + b)
sds <- 1
Mujer2 <- Mujerb0 + beta1 * Mujer1 + rnorm(N, sd = sds)      # final weights, women
Hombre2 <- Hombreb0 + beta1 * Hombre1 + rnorm(N, sd = sds)   # final weights, men

The graph can be done with the following piece of code:

datos <- data.frame(inicio = c(Mujer1, Hombre1), final = c(Mujer2, Hombre2))
datos$dif <- datos$final - datos$inicio
datos$sexo <- c(rep(0, N), rep(1, N))   # 0 = women, 1 = men

library(ggplot2)
ggplot(data = datos, aes(inicio, final, color = factor(sexo))) +
  geom_point() +
  stat_smooth(method = "lm") +
  geom_abline(intercept = 0, slope = 1) +
  ggtitle("Paradoja de Lord") +
  theme_bw()
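To close the loop on the two analyses described earlier, the sketch below re-runs both fits on the same simulated model (in Python rather than R, with a hand-rolled OLS via the normal equations; all names here are my own). Under the simulated model the ANCOVA coefficient for sex should be $(1-\beta_1)c = 0.6 \times 10 = 6$, close to the 5.98 reported above, while the difference-score regression should give a sex coefficient near zero:

```python
import random

def ols(X, y):
    """Least squares via the normal equations X'X b = X'y,
    solved by Gaussian elimination (fine for a handful of predictors)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(1)
N, gap, beta1 = 1000, 10.0, 0.4
mujer1 = [random.uniform(50, 70) for _ in range(N)]   # initial weights, women
hombre1 = [w + gap for w in mujer1]                   # initial weights, men
mM, mH = sum(mujer1) / N, sum(hombre1) / N
mujer2 = [(1 - beta1) * mM + beta1 * w + random.gauss(0, 1) for w in mujer1]
hombre2 = [mH - beta1 * (mM + gap) + beta1 * w + random.gauss(0, 1) for w in hombre1]

inicio = mujer1 + hombre1
final = mujer2 + hombre2
sexo = [0.0] * N + [1.0] * N

# Statistician 1: regress the weight difference on sex -> coefficient near 0
coef_dif = ols([[1.0, s] for s in sexo], [f - i for f, i in zip(final, inicio)])[1]
# Statistician 2: ANCOVA, final ~ inicio + sexo -> sex coefficient near (1 - beta1) * gap = 6
coef_ancova = ols([[1.0, i, s] for i, s in zip(inicio, sexo)], final)[2]
print(coef_dif, coef_ancova)
```

Both statisticians' numbers emerge from the same data: the difference-score coefficient hovers around zero, while the covariance-adjusted sex coefficient sits near 6.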
http://www.gradesaver.com/1984/q-and-a/on-chapter-9-part-2-define-blackwhite-why-is-its-definition-important-to-the-partys-survival-101977
# On Chapter 9, Part 2: Define blackwhite. Why is its definition important to the Party's survival?

Thank you so much for your help, it really means a lot.
http://www.insect.org.cn/EN/10.16380/j.kcxb.2016.06.003
Acta Entomologica Sinica ›› 2016, Vol. 59 ›› Issue (6): 602-612.

• RESEARCH PAPERS •

### Molecular cloning, characterization and expression analysis of trehalase genes in the rice leaf folder, Cnaphalocrocis medinalis (Lepidoptera: Pyralidae)

TIAN Yu, DU Juan, LI Shang-Wei*, LI Jiao, WANG Shuang

(Provincial Key Laboratory for Agricultural Pest Management of Mountainous Region, Institute of Entomology, Guizhou University, Guiyang 550025, China)

• Online: 2016-06-20  Published: 2016-06-20

Abstract: 【Aim】 Trehalase (Tre) is a key enzyme in the trehalose metabolism of insects and plays important roles in development and energy regulation. Insects possess two types of trehalases, i.e., soluble trehalase (Tre1) and membrane-bound trehalase (Tre2). This study aims to clone the trehalase genes (CmTre) of the rice leaf folder, Cnaphalocrocis medinalis, to clarify their expression patterns in different tissues and developmental stages, and to analyze the molecular characteristics of the two types of the gene and their products. 【Methods】 Based on the transcriptome data of C. medinalis, the full-length cDNAs of CmTre were cloned using rapid amplification of cDNA ends (RACE)-PCR and analyzed by bioinformatics. CmTre mRNA expression levels in different tissues of adults and across developmental stages of C. medinalis were detected by real-time quantitative PCR (RT-qPCR). 【Results】 We cloned two types of CmTre, i.e., the soluble trehalase gene CmTre1 and the membrane-bound trehalase gene CmTre2. The full-length cDNA of CmTre1 is 2 364 bp, containing a 1 704 bp open reading frame (ORF) that encodes 567 amino acids, while that of CmTre2 is 2 079 bp, containing a 1 923 bp ORF that encodes 640 amino acids. Bioinformatics analysis indicated that CmTre includes a signal peptide; CmTre1 has no transmembrane domain, whereas CmTre2 contains a transmembrane domain.
Homology and phylogenetic analyses showed that the amino acid sequences of CmTre1 and CmTre2 have the highest identities, 74% and 79% respectively, with those of Tre1 and Tre2 from Omphisa fuscidentalis. Homology modeling demonstrated that the tertiary structure of CmTre1 is composed of 19 α-helices and 2 β-sheets, while that of CmTre2 is composed of 23 α-helices and no β-sheets. RT-qPCR revealed that CmTre was expressed throughout all developmental stages of C. medinalis, with the highest expression level in the adult stage and a relatively stable expression level in the larval stage. CmTre1 was expressed at the lowest level in the pupa, while CmTre2 was expressed at the lowest level in the 5th instar larva. CmTre was expressed in all the adult tissues tested (midgut, integument, Malpighian tubules, head, ovary, fat body, muscle and testis). CmTre1 was expressed at higher levels in the midgut and integument, while CmTre2 was expressed at higher levels in the muscle and midgut. 【Conclusion】 In this study, the genes of a soluble and a membrane-bound form of trehalase in C. medinalis were cloned, and their characteristics and expression patterns were analyzed. The findings lay the foundation for further research on the functions of trehalase genes and for using these genes as targets of pest control.
https://www.physicsforums.com/threads/2-braking-times.43255/
2 braking times?

1. Sep 15, 2004 — phipps66
Ok, I have this problem and I am having trouble with it. It reads: A car starts from rest and travels for 5.0 seconds with a uniform acceleration of +1.5 m/s^2. The driver then applies the brakes, causing a uniform acceleration of -2.0 m/s^2. If the brakes are applied for 3.0 seconds, find: how fast the car is going at the end of the braking period, and how far it has gone. I am confused about which initial time and which acceleration to use. I am lost; physics is my worst subject.

2. Sep 15, 2004 — phipps66
Ok, I figured out the first part: 1.5 m/s is how fast the car is going at the end of the braking period. I just can't figure out the distance.

3. Sep 15, 2004 — NateTG
What formulas do you have?

4. Sep 15, 2004 — phipps66
The kinematic equations.

5. Sep 15, 2004 — NateTG
Can you figure out the distance that is traveled while the car is accelerating (before braking)?
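For readers checking the numbers, both phases follow directly from the constant-acceleration equations v = v0 + a*t and d = v0*t + (1/2)*a*t^2. A quick sketch (mine, not part of the original thread):

```python
# Phase 1: accelerate from rest for 5.0 s at +1.5 m/s^2
a1, t1 = 1.5, 5.0
v1 = 0.0 + a1 * t1              # speed when the brakes are applied: 7.5 m/s
d1 = 0.5 * a1 * t1 ** 2         # distance covered while accelerating

# Phase 2: brake for 3.0 s at -2.0 m/s^2, starting from v1
a2, t2 = -2.0, 3.0
v2 = v1 + a2 * t2               # speed at the end of the braking period
d2 = v1 * t2 + 0.5 * a2 * t2 ** 2

print(v2, d1 + d2)              # -> 1.5 32.25  (m/s, m)
```

The 1.5 m/s matches what phipps66 found in post 2; the total distance is 18.75 m while accelerating plus 13.5 m while braking.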
https://www.scielo.br/j/pope/a/MKddVBRTgxYBGXD7cp6wSsh/
# ABSTRACT

Industries conduct Sales and Operations Planning (S&OP) to balance demand and supply in line with business targets. This study proposes a model and an algorithm for tactical supply chain planning that admit uncertainty and reflect the rolling-horizon planning peculiar to S&OP. To this end, a two-stage stochastic programming model is developed and solved via a multi-cut Benders decomposition algorithm. The model and the solution method are evaluated by numerical experiments and a case study. Results show that the optimal supply chain profit is not proportional to demand; in fact, an increase in demand can even decrease the optimal profit due to capacity constraints along the supply chain. Such findings reinforce that profitability and service level are increased by the synergy of the sales team with the production, distribution, and procurement teams in establishing which demand should be satisfied - or not - in each period. The stochastic solution is compared to deterministic approaches.

Keywords: sales and operations planning; supply chain planning; stochastic programming; Benders decomposition

# 1 INTRODUCTION

Supply chains (SC) are dynamic systems that operate in uncertain environments to meet customers' requirements. Marketing uncertainties and the increasing complexity of operations raise further challenges for SC coordination. To cope with these challenges, companies adopt Sales and Operations Planning (S&OP), a centralized planning process that improves vertical integration and inter-functional coordination. S&OP is a business planning process that comprises the coordination of material, financial, and information flows to balance customer demand with supply capabilities by establishing production mix and volume at the tactical level (Tuomikangas & Kaipia, 2014). S&OP has its origins in aggregate production planning, introduced in the 1950s, and settled into use in business and academia only by the early 2000s. IT tools and models support the communication and decision-making process, supporting both tactics and strategy. Strategy defines the level of data aggregation, and the models use aggregate data to set the medium-term tactical SC plan (Buxey, 2003; Thomé et al., 2012; Ba et al., 2018). Every period, decision-makers share procurement, production, distribution, and sales activities to produce a consensus forecast and to validate the company's tactical plan. The process follows a predefined schedule to review customer demand and supply resources, creating a revised plan across an agreed rolling horizon. This framework increases the quality of the first-period plan data, which become the demand requirements included in a fixed horizon for short-term programs. Although pilot projects can adopt spreadsheets, when the process evolves to a maturity model, optimization tools with sophisticated models are recommended. However, the development of powerful S&OP tools integrated with financial parameters requires further research (Thomé et al., 2012; Tuomikangas & Kaipia, 2014).

Empirical studies show that S&OP practice has a positive impact on operational performance, particularly in plants with complex manufacturing processes (Thomé et al., 2014). Case studies in the electronics, oil, and food sectors show companies that successfully adopted S&OP through a mathematical modeling approach (Wang et al., 2012; Zhulanova & Zhulanov, 2014; Taşkın et al., 2015; Nemati et al., 2017). The results include the involvement of top managers in the development of a consensus plan and enhanced coordination between finance and the procurement, production, and distribution activities. The models adopt an aggregate demand forecast and encompass all planning periods.
The advantage of adopting the aggregate demand forecast is that it will have reduced variance unless all items are perfectly correlated (Hax & Meal, 1973). However, S&OP implementation remains difficult and challenging (Pedroso et al., 2016). Companies lack the right managerial tools to achieve the desired outcomes. Besides, the planning problems remain deterministic, based on the analysis of a single-stage demand scenario, so uncertainty is not properly evaluated. The idea of incorporating uncertainty in mathematical programming was pioneered by Dantzig (1955), and the concept of integrating a decentralized SC by stage was introduced by Clark & Scarf (1960). Since then, the understanding of uncertainty via stochastic programming for production and inventory planning has progressed (Birge & Louveaux, 2011; King & Wallace, 2012; Alem & Morabito, 2013; Cunha et al., 2017).

Research has addressed tactical SC planning under uncertainty through a two-stage stochastic programming (2SSP) approach (Moraes & Faria, 2016). In the S&OP context, applications address the configure-to-order system (Chen-Ritzo et al., 2010), the chemical industry (Calfa et al., 2015), the forest-based biomass power plant (Shabani & Sowlati, 2016), and the blood SC (Dillon et al., 2017). Nevertheless, few works have developed models to evaluate a rolling horizon framework, as discussed in mining operations (Carniato & Camponogara, 2011) and renewable energy power systems (Wang et al., 2020). The inclusion of uncertainty in SC models leads to large-scale problems due to the numerous elements present in each echelon. Therefore, decomposition approaches, such as Benders decomposition (BD) (Benders, 1962), can be used. However, such applications have been employed for the capacity expansion problem, at a strategic level, via the stochastic dual dynamic programming algorithm (Thomé et al., 2013). To date, the proposed models for S&OP assume that the first planning period has the same level of importance as the complementary planning periods. Yet the first planning period has more information than a pure demand forecast. Moreover, considering its importance to practice, analyzing the different scenarios simultaneously should hedge the first period against the uncertainty of the following periods. The current S&OP planning practice advocates better planning and fixing the first-period plan.
However, there is a lack of practical and academic studies proposing SC planning models that approach the S&OP rolling horizon framework and set the best first-period plan based on uncertain scenarios of the complementary periods. Besides, few models address uncertainty in tactical SC planning by a 2SSP approach (Shabani & Sowlati, 2016; Dillon et al., 2017) and BD (You & Grossmann, 2013; Oliveira et al., 2014; Kayvanfar et al., 2018), and, to the best of our knowledge, no study has proposed a BD algorithm to solve a 2SSP model based on the broader scope of the S&OP method. This manuscript aims at evaluating a tactical SC model aligned to the S&OP rolling horizon planning strategy, approaching uncertainty by a 2SSP formulation, which leads to a complex problem; and at proposing a multi-cut BD algorithm to reduce the computational solving time of the large-scale problem.
The work evaluates the decomposition approach through a numerical experiment and a case study in a flat steel chain. For modeling the steel production technology, the reader is referred to a seminal paper (Fabian, 1958), a survey (Dutta & Fourer, 2001), and applications (Seong & Suh, 2012). The rest of the paper is organized as follows: Section 2 introduces the 2SSP formulation. Section 3 presents a multi-cut BD algorithm developed for solving large-scale problems. In Sections 4 and 5, the model and the algorithm are evaluated by a numerical experiment and a case study, respectively. Finally, Section 6 draws conclusions and suggestions for future research.

# 2 MATHEMATICAL FORMULATION

This section proposes a 2SSP formulation for S&OP that adopts technology constraints for industries facing uncertainty in product price and demand. The approach enhances the SC model to evaluate flexibility in tactical SC planning (Almeida et al., 2018) and models the planning-process uncertainties in second-stage scenarios, such that practitioners are responsible for implementing only the first-period results. At the end of every period, the newly available information updates the second-stage scenarios of the following periods on a rolling-planning-horizon basis. The 2SSP formulation adopts a classic notation (Birge & Louveaux, 2011) and consists in maximizing

$\max \left\{ cx + E_{\xi} Q(x, \xi) \;\middle|\; Ax = b,\ x \geq 0 \right\}, \quad \text{where } Q(x, \xi) = \max \left\{ q(\xi)\, y(\xi) \;\middle|\; T(\xi) x + W y(\xi) = h(\xi),\ y(\xi) \geq 0 \right\}.$

Here $E_{\xi}$ is the mathematical expectation with respect to $\xi$, and $\Xi \subseteq \Re^N$ is the support of $\xi$, that is, the smallest closed subset in $\Re^N$ such that $P\{\xi \in \Xi\} = 1$.

The model represents a four-echelon SC with $\mathcal{P}$ as the set of products, consisting of $\mathcal{X}$ raw materials and $\mathcal{Y}$ finished products (i.e., $\mathcal{P} = \mathcal{X} \cup \mathcal{Y}$). Let $\mathcal{L}$ be the set of locations, consisting of $\mathcal{F}$ suppliers, $\mathcal{I}$ industrial plants, $\mathcal{H}$ distribution hubs, and $\mathcal{C}$ customers; thus, $\mathcal{L} = \mathcal{F} \cup \mathcal{I} \cup \mathcal{H} \cup \mathcal{C}$. In this four-echelon SC, the $\mathcal{F}$ suppliers provide the $\mathcal{X}$ raw materials to the $\mathcal{I}$ industrial plants. These plants process raw materials on $\mathcal{R}$ resources and make the $\mathcal{Y}$ finished products over $\mathcal{T}$ periods to meet the demands of the $\mathcal{C}$ customers. The sets of products, locations, resources, and time periods are indexed by $p$, $l$, $r$ and $t$, respectively. The notation used to formulate (i) the deterministic and stochastic parameters, (ii) the first- and second-stage variables, and (iii) the elements of the objective function (1) and constraints (2)-(48) is described in Tables 1, 2 and 3, respectively.

Table 1 Deterministic and stochastic parameters of the 2SSP model.
Table 2 First-stage and second-stage decision variables of the 2SSP model.
Table 3 Objective function elements of the four-echelon SC planning model.

The 2SSP objective function for the optimization problem can be stated as follows:

$\max \Psi = R_1^P - C_1^L - C_1^F - C_1^V - C_1^P - C_1^S - C_1^X + Q(\alpha, y, r)$ (1a)

$Q(\alpha, y, r) = \sum_{s \in S} \rho_s \left( R_2^P - C_2^L - C_2^F - C_2^V - C_2^P - C_2^S - C_2^X \right)$ (1b)

The objective function maximized in Eq. (1) represents the expected profit resulting from the after-tax revenue and the operational costs. $Q(\alpha, y, r) = E_{\omega}[\alpha, y, r, \xi]$ represents the expectation over the second-stage scenarios, evaluated over all possible realizations of the uncertain parameters given the decisions $(\alpha, y, r)$, and $\rho_s$ is the occurrence probability of each scenario $s$ ($\sum_{s \in S} \rho_s = 1$).
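As a toy illustration of the two-stage structure $\max\{cx + E_{\xi} Q(x,\xi)\}$ (my own miniature example with invented numbers, not the authors' SC model), take a single first-stage decision $x$ (units produced at a unit cost) and a second-stage recourse that sells $\min(x, d_s)$ at a fixed price under three demand scenarios. The deterministic equivalent is then one-dimensional and can be solved by direct search:

```python
# Toy two-stage stochastic program (hypothetical numbers):
#   max_x  -cost * x + sum_s rho_s * Q(x, d_s),  with Q(x, d) = price * min(x, d)
cost, price = 4.0, 10.0
scenarios = [(0.3, 50), (0.5, 80), (0.2, 120)]   # (probability rho_s, demand d_s)

def expected_profit(x):
    # first-stage cost plus the probability-weighted second-stage recourse value
    return -cost * x + sum(rho * price * min(x, d) for rho, d in scenarios)

# Deterministic equivalent: enumerate candidate first-stage decisions
best_x = max(range(201), key=expected_profit)
print(best_x, expected_profit(best_x))           # -> 80 390.0
```

The optimum hedges across scenarios: producing for the highest-demand scenario (x = 120) would lower the expected profit to 310, which is the flavor of trade-off the first-period S&OP plan is meant to capture.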
The objective function is subject to the following constraints:

s_{lpt} = S^{0}_{lp},  ∀ l ∈ I∪H, p ∈ P, t = 0  (2)

S^{S}_{lpt} ≤ s_{lpt} ≤ S^{X}_{lpt},  ∀ l ∈ I∪H, p ∈ P, t = 1  (3)

S^{S}_{lpt} ≤ s_{lpts} ≤ S^{X}_{lpt},  ∀ l ∈ I∪H, p ∈ P, t ∈ 2..|T|, s ∈ S  (4)

L^{M}_{lp} r_{lp} ≤ A^{R}_{lpt},  ∀ l ∈ F, p ∈ X, t = 1  (5)

L^{M}_{lp} r_{lpts} ≤ A^{R}_{lpt},  ∀ l ∈ F, p ∈ X, t ∈ 2..|T|, s ∈ S  (6)

Constraint (2) expresses the initial stocks of raw materials and goods present in industrial plants and distribution centers. Constraints (3) and (4) describe the storage of raw materials and finished products: quantities must respect the inventory safety levels and must not exceed the storage capacity limits of each location. Constraints (5) and (6) state that the number of lots of raw materials purchased must respect their availability at the suppliers in each period.

The revenue and cost elements of the objective function are defined as:

R1P = Σ_{l∈C} Σ_{p∈Y} (R_p − T^{X}_{lp}) d_{lp}
C1L = Σ_{m∈M} Σ_{ll'∈K} Σ_{p∈Y} C^{L}_{mll'} t_{mll'p}
C1F = Σ_{l∈I} Σ_{r∈R} C^{F}_{lr} y_{lr}
C1V = Σ_{l∈I} Σ_{p∈Y} C^{V}_{lp} α_{lp}
C1P = Σ_{l∈F} Σ_{p∈P} C^{P}_{lp} r_{lp}
C1S = Σ_{l∈I} Σ_{p∈Y} C^{S}_{lp} s_{lp}
C1X = Σ_{l∈I} Σ_{r∈R} C^{X}_{lr} c'_{lr}
C1N = Σ_{l∈C} Σ_{p∈Y} N_p n_{lp}
R2P = Σ_{l∈C} Σ_{p∈Y} Σ_{t>1} (R_{ps} − T^{X}_{lps}) d_{lpts}
C2L = Σ_{m∈M} Σ_{ll'∈K} Σ_{p∈Y} Σ_{t>1} C^{L}_{mll's} t_{mll'pts}
C2F = Σ_{l∈I} Σ_{r∈R} Σ_{t>1} C^{F}_{lrs} y_{lrts}
C2V = Σ_{l∈I} Σ_{p∈Y} Σ_{t>1} C^{V}_{lps} α_{lpts}
C2P = Σ_{l∈F} Σ_{p∈P} Σ_{t>1} C^{P}_{lps} r_{lpts}
C2S = Σ_{l∈I} Σ_{p∈Y} Σ_{t>1} C^{S}_{lps} s_{lpts}
C2X = Σ_{l∈I} Σ_{r∈R} Σ_{t>1} C^{X}_{lrs} c'_{lrts}
C2N = Σ_{l∈C} Σ_{p∈Y} Σ_{t>1} N_{ps} n_{lpts}

L^{M}_{lp} r_{lp} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'p},  ∀ l ∈ F, p ∈ X, t = 1  (7)

L^{M}_{lp} r_{lpts} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'pts},  ∀ l ∈ F, p ∈ X, t ∈ 2..|T|, s ∈ S  (8)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lp} + S^{0}_{lp} = s_{lp} + b_{lp},  ∀ l ∈ I, p ∈ X, t = 1  (9)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lpts} + s_{lp,t−1,s} = s_{lpts} + b_{lpts},  ∀ l ∈ I, p ∈ X, t ∈ 2..|T|, s ∈ S  (10)

L^{M}_{lp} α_{lp} + Σ_{m∈M} Σ_{l'l∈K} t_{ml'lp} + S^{0}_{lp} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'p} + s_{lpt},  ∀ l ∈ I, p ∈ Y, t = 1  (11)

L^{M}_{lp} α_{lpts} + Σ_{m∈M} Σ_{l'l∈K} t_{ml'lpts} + s_{lp,t−1,s} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'pts} + s_{lpts},  ∀ l ∈ I, p ∈ Y, t ∈ 2..|T|, s ∈ S  (12)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lp} + S^{0}_{lp} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'p} + s_{lpt},  ∀ l ∈ H, p ∈ Y, t = 1  (13)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lpts} + s_{lp,t−1,s} = Σ_{m∈M} Σ_{ll'∈K} t_{mll'pts} + s_{lpts},  ∀ l ∈ H, p ∈ Y, t ∈ 2..|T|, s ∈ S  (14)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lp} = d_{lp},  ∀ l ∈ C, p ∈ Y, t = 1  (15)

Σ_{m∈M} Σ_{l'l∈K} t_{ml'lpts} = d_{lpts},  ∀ l ∈ C, p ∈ Y, t ∈ 2..|T|, s ∈ S  (16)

The end of each period is connected by the sum of the input and output flows; consequently, the transportation of products is not permitted if the product does not reach its destination within the planned horizon. The input and output flows are respected for each location, product, and period. Equations (7) and (8) refer to the procurement and transportation of raw materials to industrial plants. Equations (9) and (10) represent the stock flow of raw materials and their consumption in producing finished products. The input flow is expressed by the transport of raw materials or finished products from the preceding SC echelon, the production of lots of goods, the inventory level, and the procurement of multiple lots of raw materials at the end of the previous period. Equations (7) to (10) represent the flow of raw materials, while equations (11) and (12) represent the flow of finished products in industrial plants. Equations (13) and (14) represent the flow of finished products at distribution centers, and equations (15) and (16) represent the transportation and delivery of finished products to customers. The output flow is the result of the balance of the shipment of items to the subsequent SC echelon, the satisfied demand, the inventory level, and the consumption of raw materials in processes at the end of a period.
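As a sanity check of the balance logic behind constraints (9)-(14), the snippet below verifies that inflow plus opening stock equals outflow plus closing stock plus consumption at a node. The function and the figures are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of the per-location, per-product, per-period balance
# underlying constraints (9)-(14): what enters a node (inbound transport
# plus opening stock) must equal what leaves it (outbound transport,
# closing stock, and -- for raw materials -- production consumption).

def balanced(inbound, opening_stock, outbound, closing_stock,
             consumed=0.0, tol=1e-9):
    """True when the node's material balance holds within tolerance."""
    lhs = inbound + opening_stock
    rhs = outbound + closing_stock + consumed
    return abs(lhs - rhs) <= tol

# raw material at a plant: 100 t arrive, 20 t opening stock,
# 90 t consumed in production, 30 t carried over -> balanced
ok = balanced(inbound=100.0, opening_stock=20.0, outbound=0.0,
              closing_stock=30.0, consumed=90.0)
```

Such a check is useful when validating solver output: every non-balanced node indicates a modeling or data error.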
Σ_{m∈M} Σ_{l'l∈K} Σ_{p∈Y} t_{ml'lp} ≤ C^{I}_{lt},  ∀ l ∈ H, t = 1  (17)

Σ_{m∈M} Σ_{l'l∈K} Σ_{p∈Y} t_{ml'lpts} ≤ C^{I}_{lt},  ∀ l ∈ H, t ∈ 2..|T|, s ∈ S  (18)

Σ_{m∈M} Σ_{ll'∈K} Σ_{p∈Y} t_{mll'p} ≤ C^{O}_{lt},  ∀ l ∈ H, t = 1  (19)

Σ_{m∈M} Σ_{ll'∈K} Σ_{p∈Y} t_{mll'pts} ≤ C^{O}_{lt},  ∀ l ∈ H, t ∈ 2..|T|, s ∈ S  (20)

Σ_{p∈Y} a_{lrp} M^{C}_{lrp} = c_{lr},  ∀ l ∈ I, r ∈ R, t = 1  (21)

Σ_{p∈Y} a_{lrpts} M^{C}_{lrp} = c_{lrts},  ∀ l ∈ I, r ∈ R, t ∈ 2..|T|, s ∈ S  (22)

c_{lr} ≤ A^{V}_{lrt} y_{lr} + c'_{lr} A^{X}_{lrt},  ∀ l ∈ I, r ∈ R, t = 1  (23)

c_{lrts} ≤ A^{V}_{lrt} y_{lrts} + c'_{lrts} A^{X}_{lrt},  ∀ l ∈ I, r ∈ R, t ∈ 2..|T|, s ∈ S  (24)

Constraints (17) to (20) describe the inbound and outbound handling capacities at the distribution centers for each period. The production in each process depends on the route and production time of each item. Equations (21) and (22) represent the use of production capacity. Constraints (23) and (24) express the capacity of a process, which is ruled by the available production time. In a given period, a process may or may not be activated; if activated, its capacity can be reduced, for instance, by implementing preventive maintenance.

c'_{lr} ≤ y_{lr},  ∀ l ∈ I, r ∈ R, t = 1  (25)

c'_{lrts} ≤ y_{lrts},  ∀ l ∈ I, r ∈ R, t ∈ 2..|T|, s ∈ S  (26)

Constraints (25) and (26) allow overtime to be chosen when it is a profitable option. The use of extra capacity results in extra costs, which are included in the objective function; the value of these extra costs is bounded by the company. These constraints also ensure that extra capacity can be activated only if there is a production requirement in the period.

a_{lrp} T^{R}_{lrp} = L^{M}_{lp} α_{lp},  ∀ l ∈ I, r ∈ R, p ∈ P, t = 1  (27)

a_{lrpts} T^{R}_{lrp} = L^{M}_{lp} α_{lpts},  ∀ l ∈ I, r ∈ R, p ∈ P, t ∈ 2..|T|, s ∈ S  (28)

b_{lp'} = Σ_{p∈Y} B_{p'p} L^{M}_{lp} α_{lp},  ∀ l ∈ I, p' ∈ X, t = 1  (29)

b_{lp'ts} = Σ_{p∈Y} B_{p'p} L^{M}_{lp} α_{lpts},  ∀ l ∈ I, p' ∈ X, t ∈ 2..|T|, s ∈ S  (30)

Σ_{p∈X} t_{mll'p} ≤ T^{CX}_{mll'},  ∀ m ∈ M, ll' ∈ K, t = 1  (31)

Σ_{p∈X} t_{mll'pts} ≤ T^{CX}_{mll'},  ∀ m ∈ M, ll' ∈ K, t ∈ 2..|T|, s ∈ S  (32)

Σ_{p∈Y} t_{mll'p} ≤ T^{CY}_{mll'},  ∀ m ∈ M, ll' ∈ K, t = 1  (33)

Σ_{p∈Y} t_{mll'pts} ≤ T^{CY}_{mll'},  ∀ m ∈ M, ll' ∈ K, t ∈ 2..|T|, s ∈ S  (34)

d_{lp} = D^{C}_{tps} − n_{lps},  ∀ l ∈ C, p ∈ Y, s ∈ S, t = 1  (35)

d_{lpts} = D^{C}_{tps} − n_{lpts},  ∀ l ∈ C, p ∈ Y, t ∈ 2..|T|, s ∈ S  (36)

b_{lp}, s_{lpt}, d_{lp}, n_{lp} ≥ 0,  ∀ l ∈ L, p ∈ P, t = 1  (37)

b_{lpts}, s_{lpts}, d_{lpts}, n_{lpts} ≥ 0,  ∀ l ∈ L, p ∈ P, t ∈ 2..|T|, s ∈ S  (38)

t_{mll'p} ≥ 0,  ∀ m ∈ M, ll' ∈ L, p ∈ P  (39)

t_{mll'pts} ≥ 0,  ∀ m ∈ M, ll' ∈ L, p ∈ P, t ∈ 2..|T|, s ∈ S  (40)

α_{lp}, r_{lp} ∈ ℤ₊,  ∀ l ∈ L, p ∈ P  (41)

α_{lpts}, r_{lpts} ∈ ℤ₊,  ∀ l ∈ L, p ∈ P, t ∈ 2..|T|, s ∈ S  (42)

a_{lrp} ≥ 0,  ∀ l ∈ I, r ∈ R, p ∈ P  (43)

a_{lrpts} ≥ 0,  ∀ l ∈ I, r ∈ R, p ∈ P, t ∈ 2..|T|, s ∈ S  (44)

y_{lr} ∈ {0, 1},  ∀ l ∈ I, r ∈ R  (45)

y_{lrts} ∈ {0, 1},  ∀ l ∈ I, r ∈ R, t ∈ 2..|T|, s ∈ S  (46)

0 ≤ c'_{lr} ≤ 1,  ∀ l ∈ I, r ∈ R  (47)

0 ≤ c'_{lrts} ≤ 1,  ∀ l ∈ I, r ∈ R, t ∈ 2..|T|, s ∈ S  (48)

Constraints (27) and (28) ensure that a finished product is released by the last machine of the product line routing in each plant. Constraints (29) and (30) express the bill of materials for a generic product structure (Pochet & Wolsey, 2006); accordingly, a finished product results from the combination of raw materials in different proportions. Constraints (31) to (34) guarantee that the product flow does not surpass the transportation capacity of each transport mode. Constraints (35) and (36) indicate that, eventually, part of the original demand may not be satisfied. Constraints (37) to (48) define the domains of the variables.

# 3 MULTI-CUT BENDERS DECOMPOSITION

Stochastic programming problems take uncertainty into account, but the resulting models tend to grow large and require significant computational resources. This section proposes a relaxation followed by a multi-cut decomposition strategy to solve the original stochastic problem.
The procedure consists in decomposing the complete deterministic equivalent problem into a Master Problem (MP) and relaxed Slave Problems (SP), where the recourse decisions are taken. The optimization model, with first-stage integer variables (in the first planning period) and second-stage continuous variables (in the complementary planning periods), can be approached by the L-shaped method (Van Slyke & Wets, 1969; Laporte & Louveaux, 1993). The method is a scenario-based decomposition structure built on Benders decomposition (Benders, 1962) and applied to stochastic optimization. The MP can be reformulated as follows:

max Ψ = R1P − C1L − C1F − C1V − C1P − C1S − C1X + θ  (49)

subject to the first-period (t = 1) constraints (2), (3), (5), (7), (9), (11), (13), (15), (17), (19), (21), (23), (25), (27), (29), (31), (33), (35), (37), (39), (41), (43), (45), (47), and

θ ≤ Q(α, y, r)  (50)

The variable θ introduced in the objective function (49) connects the MP to each scenario SP. However, since constraint (50) is not defined explicitly, it cannot be used computationally; it is replaced by a set of cuts, generated from the dual vectors of the SPs, which are gradually added to the MP in an iterative process.
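To make the MP-SP exchange concrete, here is a hedged, toy-scale sketch of a multi-cut Benders loop. The data (one first-stage purchase variable, three demand scenarios) are hypothetical, and the master is solved by grid enumeration purely for illustration; in the paper, the MP and SPs are LPs/MILPs handled by a solver.

```python
# Toy multi-cut Benders loop: the first stage buys x at unit cost, each
# scenario s resells min(x, d_s) at a fixed price. One optimality cut
# per scenario is added at every iteration (the multi-cut variant).
# All data are made up; the master is enumerated over a grid.

COST, PRICE = 1.0, 2.0
SCEN = [(1/3, 1.0), (1/3, 2.0), (1/3, 3.0)]   # (rho_s, demand d_s)
GRID = [0.5 * k for k in range(9)]            # candidate x in [0, 4]

# cuts[s]: list of (intercept, slope) pairs, theta_s <= a + b * x
cuts = [[(PRICE * d, 0.0)] for _, d in SCEN]  # valid start: Q_s <= price*d_s

def master(x):
    """Relaxed master value at x under the current cut pools."""
    theta = sum(rho * min(a + b * x for a, b in cuts[s])
                for s, (rho, _) in enumerate(SCEN))
    return -COST * x + theta

lb = float("-inf")
for _ in range(20):
    x_hat = max(GRID, key=master)             # "solve" the master
    ub = master(x_hat)                        # upper bound (maximization)
    profit = -COST * x_hat
    for s, (rho, d) in enumerate(SCEN):
        q_val = PRICE * min(x_hat, d)         # subproblem optimum Q_s(x_hat)
        slope = PRICE if x_hat < d else 0.0   # subgradient of Q_s at x_hat
        cuts[s].append((q_val - slope * x_hat, slope))  # optimality cut
        profit += rho * q_val
    lb = max(lb, profit)                      # lower bound from recourse
    if ub - lb < 1e-9:
        break
```

On this instance the loop closes the gap in two iterations; with a single aggregated cut, the master would receive less information per iteration and typically needs more of them.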
The SPs are reformulated as follows:

max Φ = Σ_{s∈S} ρ_s (R2P − C2L − C2F − C2V − C2P − C2S − C2X)  (51)

subject to the remaining-period (t ∈ 2..|T|) constraints (4), (6), (8), (10), (12), (14), (16), (18), (20), (22), (24), (26), (28), (30), (32), (34), (36), (38), (40), (42), (44), (46), and (48).

The proposed model with relaxed second-stage variables has complete recourse (Birge & Louveaux, 2011); therefore, the second-stage problem is feasible for any feasible first-stage solution, so only optimality cuts are needed in the stochastic Benders decomposition. In a single-cut approach, the number of iterations needed to reach the optimum grows exponentially with the number of realizations; the advantage of the proposed multi-cut approach is that it grows linearly (Oliveira et al., 2014). Let n ∈ ℕ be the index of the iterations needed to reach the optimum. To accelerate the BD algorithm, we decompose the variable θ for each scenario s, returning a number of cuts equal to the number of scenarios at each iteration n. We define π_i ∈ Π as the optimal extreme point of the dual polyhedron Π resulting from constraints i = (4), (6), (10), (12), (14), (18), (20), (32), (34), and (36). However, we consider only a subset Π′ of Π because cuts are added iteratively. Inequality (50) is replaced by the optimality multi-cuts (52), which link the MP and the SP scenarios:

θ_s ≤ Σ_{l∈I∪H} Σ_{p∈P} Σ_{t∈2..|T|} π_{(4)} S^{S}_{lpt} + Σ_{l∈I∪H} Σ_{p∈P} Σ_{t∈2..|T|} π_{(4)} S^{X}_{lpt} + Σ_{l∈F} Σ_{p∈P} Σ_{t∈2..|T|} π_{(6)} A^{R}_{lpt} − Σ_{l∈I} Σ_{p∈X} Σ_{t∈2..|T|} π_{(10)} s_{lp,t−1,s} − Σ_{l∈I} Σ_{p∈Y} Σ_{t∈2..|T|} π_{(12)} s_{lp,t−1,s} − Σ_{l∈H} Σ_{p∈Y} Σ_{t∈2..|T|} π_{(14)} s_{lp,t−1,s} + Σ_{l∈H} Σ_{t∈2..|T|} π_{(18)} C^{I}_{lt} + Σ_{l∈H} Σ_{t∈2..|T|} π_{(20)} C^{O}_{lt} + Σ_{m∈M} Σ_{ll'∈K} Σ_{t∈2..|T|} π_{(32)} T^{CX}_{mll'} + Σ_{m∈M} Σ_{ll'∈K} Σ_{t∈2..|T|} π_{(34)} T^{CY}_{mll'} + Σ_{l∈C} Σ_{p∈Y} Σ_{t∈2..|T|} π_{(36)} D^{C}_{tps},  ∀ s ∈ S, π_i ∈ Π′  (52)

The proposed Algorithm 1 is applied to solve the mixed-integer 2SSP SC planning problem. It consists in relaxing the SP integrality constraints, conducting the multi-cut BD approach, and then recovering the integrality constraints in a branch-and-bound or branch-and-cut scheme (Birge & Louveaux, 2011) while non-examined nodes remain. The strategy is evaluated by numerical experiments and a case study.

Algorithm 1 Multi-cut Benders decomposition for the MILP 2SSP model.

# 4 NUMERICAL EXPERIMENTS

In this section, numerical experiments are conducted to evaluate the computational performance of the proposed algorithm on a medium-sized 2SSP SC problem with 6 suppliers, 2 industrial plants, 4 distribution centers, 20 demand clusters, 8 types of raw materials, 10 production resources, 20 product families, and 2 modes of transport over a planning horizon of 12 months. The models were implemented in AMPL™ (Fourer et al., 2003) and solved with Gurobi 9.0™ on Linux Mint 17.3 64-bit with 16 GB of RAM and an Intel Core i5 2.50 GHz processor. Gurobi used the dual simplex LP optimizer with presolve activated, and branch-and-cut with simplex for the MIP optimizer with cutting planes activated (Gomory, implied bound, MIR, flow cover, zero-half) and multi-threading (4 of 4 available processors). The experiment consists of solving 10 test problems with the number of scenarios ranging from 20 to 200 in increments of 20. The experiment comprises different instances of independent samples of the random variables, adopting the same parameters for all instances.
The probability is uniformly distributed according to the number of scenarios. The optimization experiments ran for up to 10,000 seconds to evaluate the efficiency of the proposed method compared to the monolithic model. Table 4 presents the problem sizes and shows the effect of the multi-cut BD algorithm applied to the relaxed LP and to the MILP version of the 2SSP model, and Table 5 presents the statistical analysis of the scenario outputs.

Table 4 Size and solving time of the 2SSP for the LP and MILP monolithic model (M) and multi-cut Benders decomposition (MC Dec.) model.

Table 5 Statistical analysis carried out on different scenario sets.

Figure 1 Confidence interval (95%) and standard error of the expected profit of the 2SSP model.

The results illustrated in Figure 1 suggest that the precision of the expected profit increases with a rising number of scenarios: both the confidence interval and the standard error shrink as the number of scenarios grows. The proposed multi-cut BD method is efficient in solving both linear and, particularly, mixed-integer problems. Decomposed 2SSP MILP problems take approximately one-tenth of the monolithic model's solving time on instances with 20-80 scenarios, and less than an hour on instances with 100-200 scenarios, which the monolithic model could not solve due to computer memory overflow. The method becomes more attractive as the instances grow in size. Although the results suggest the advantage of the stochastic multi-cut BD method over the monolithic model, these results may not hold for all possible test problems; nevertheless, they illustrate the potential of the proposed method.
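The scenario statistics summarized in Table 5 and Figure 1 reduce to a few lines of computation. The sketch below uses made-up profit values and assumes a normal-approximation 95% interval (±1.96 standard errors), which is a common but not the only choice.

```python
# Hedged sketch: standard error and 95% confidence interval of the
# expected profit across scenario outputs (profit values are illustrative).
import statistics

profits = [96.0, 104.0, 101.0, 99.0, 100.0]   # one profit per scenario
n = len(profits)
mean = statistics.mean(profits)
se = statistics.stdev(profits) / n ** 0.5     # sample std dev / sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)     # normal-approx. 95% CI
```

As the scenario count grows, `se` shrinks at rate 1/sqrt(n), which is the narrowing of the interval visible in Figure 1.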
The multi-cut BD is efficient because applying multiple cuts to the MP significantly reduces the number of iterations, so the MP is solved in a short time despite its large size.

# 5 CASE STUDY

The 2SSP model is applied to the tactical SC plan of a Brazilian flat steel chain that faces demand and price uncertainty. Over the last decade, the Organization for Economic Cooperation and Development (oe.cd/steelcapacity) revealed steel over-capacity, with structural supply-demand imbalances, as a challenge to the global steel industry (Otsuka, 2017). This flat steel company therefore redesigned its SC, acquiring upstream mining operations and downstream distribution centers to hedge against price and demand variations over the long term. In the medium term, the company adopted the S&OP methodology to balance demand and supply. The integration of the medium-term tactical SC plan with the short-term operational plan occurs through a monthly review, following the S&OP methodology to which the proposed 2SSP SC model is aligned. The integrated production and logistics process begins with the provision of ore and coal by three suppliers to two industrial plants, where they are converted to steel and transformed into 30 product families. Ships supply coal in lots of 60,000, 120,000, and 150,000 tons, and trains comprise 170 to 320 wagons, setting lot multiples of 17,000 and 32,000 tons. Industrial plants 1 and 2 contain 24 and 22 transformation processes, respectively. BOF furnace sizes vary from 180 to 240 tons of steel, setting batch production. The SC includes a complex logistics network with transshipment hubs at two ports, six distribution centers, and three transportation modes.
The finished products, such as slabs, plates, and coils, are shipped to 34 demand regions encompassing internal and external customers by railway, highway, or waterway over a planning horizon of 12 months. The transport costs consist of average rates for cargo trucks, wagons, or ships. Although the railway transport capacity is limited, third-party logistics (3PL) systems can expand the road transportation capacity. Process analysts provided equipment capacities, product line routings, and production costs and times. The values of demand and price follow the normal distribution, with average values derived from sales forecasts and variances from sales histories. The normal distribution captures the essential characteristics of the uncertainty and is often adopted in the literature (Gupta & Maranas, 2003; You & Grossmann, 2013). The results of the 2SSP SC planning problem are presented in Table 6. For confidentiality reasons and out of respect for the company, the original data have been preserved. The demand was generated by a random procedure following the normal distribution, with proportional data, to validate the functionality for which the model is proposed. The financial-operational report presents the results of planning over three random scenarios. In these scenarios, the demand is lower than the nominal capacity. Production is concentrated in plant 1, where fixed and variable costs are lower. Some resources of plants 1 and 2 are used to maximum capacity, requiring expansion through overtime; this is justified when the product mix is heterogeneous. Still, 6.5% of the total demand is not satisfied.
This occurs when products have high operating costs and do not share resources with other product line routes. On such occasions, the most profitable decision may be to disable a resource and lose sales.

Table 6 Financial and operating results of the case study.

For this scenario, the global demand is less than the plants' nominal capacity, so the dominant strategy is to use plant 1 at maximum capacity, due to its lower fixed and variable costs. Inventories are not fully used in the last month, due to safety stock constraints. At the ports, the flow level is higher, since these transshipment hubs concentrate all foreign market demand. The computational performance of the BD method applied to the case study model is also evaluated. Since the S&OP process presumes interactions among participants and, eventually, many optimization runs to reach a general agreement on the tactical SC plan, the experiments considered a limit of 3,600 seconds for both the decomposed and monolithic models. The results are presented in Table 7. Optimal solutions of these 2SSP SC problem instances are difficult to obtain: the monolithic model did not find a feasible initial solution in experiments with three or more scenarios, whereas the BD algorithm found solutions within an acceptable gap, in under one hour, for problems with more than two million variables and constraints.

Table 7 Performance of the 2SSP model for 1-6 scenarios.

Decision-makers in corporate environments often evaluate plans classified as pessimistic, most likely, and optimistic. Therefore, we set three random scenarios to compare the stochastic plan to the deterministic plan and to evaluate the case study's EVPI and VSS metrics with the 2SSP SC model. The decomposed 2SSP model ran for 5,990.69 s until the solver reached the optimal solution. The result is available in Table 8.

Table 8 Case study EVPI and VSS analysis.
The EVPI is the expected value of perfect information: the difference between the average of the optimal solutions of the deterministic problem with perfect information for each scenario and the solution of the stochastic programming model. The VSS is the value of the stochastic solution: the difference between the result of the stochastic model, which adopts random parameters represented by a probability distribution, and the result of the deterministic model when adopting average values. Therefore, EVPI and VSS represent, respectively, the loss of profit in the presence of uncertainty and the likely gain from solving the stochastic model. The EVPI of this 2SSP SC case study is $233,370,620.63. However, as perfect information about all the planning periods is not available, this is only a hypothetical reference value. On the other hand, the VSS of the case study is $186,998,433.19, revealing the superior quality of the stochastic model over the deterministic model. Finally, we investigated the effect of using the 2SSP model for SC planning in an S&OP context, where elements of the SC may vary and impact the tactical plan. The experiment consists in changing proportionally the random parameters of the raw material costs, the finished product prices, and the demand across −20%, −10%, +10%, and +20%, and comparing the following performance indicators to the baseline scenario: expected profit, satisfied demand, unsatisfied demand, and SC inventory level. The results suggest that a price reduction of ore and coal increases the overall SC profit proportionally. Conversely, a rise in raw material prices may cause stock disruption, reduce the service level by increasing unsatisfied demand, and affect the overall SC profit. The price variation of the finished product, however, has a greater impact on overall profit. Figure 2(b) illustrates the severe effect of finished product price reductions on the company's results.
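The EVPI and VSS metrics reported in Table 8 reduce to differences of optimal objective values. The toy sketch below uses made-up profit figures and assumes the "average" over perfect-information optima is weighted by scenario probabilities; the case study values come from the full model, not from this sketch.

```python
# Hedged toy sketch of EVPI and VSS for a maximization problem.
# WS:  probability-weighted average of per-scenario optima under
#      perfect information (wait-and-see value)
# RP:  optimum of the stochastic (recourse) model
# EEV: result of fixing the mean-value deterministic plan
# All profit figures below are hypothetical.

rho = [0.3, 0.4, 0.3]                       # scenario probabilities
perfect_info_optima = [120.0, 100.0, 90.0]  # deterministic optimum per scenario

ws = sum(p * v for p, v in zip(rho, perfect_info_optima))
rp = 98.0                                   # stochastic model optimum
eev = 92.0                                  # mean-value plan, evaluated

evpi = ws - rp    # profit lost to uncertainty
vss = rp - eev    # gain from solving the stochastic model
```

For a maximization problem the ordering WS ≥ RP ≥ EEV always holds, so both metrics are nonnegative, matching the positive EVPI and VSS reported for the case study.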
Figure 2 Operational and financial effects of changes in raw material costs and product prices.

This reduction can occur through sales discounts and macroeconomic policy restrictions. On the other hand, policies that add value to the finished product and result in price increases positively impact the overall SC profit. However, this increase in profit also depends on increasing the overall SC inventory levels. This result suggests that the reduction of unsatisfied demand is obtained by accumulating finished products during periods of available production capacity. Figure 2(c) presents a counter-intuitive result: in this simulation, a demand variation reduces profit compared to the baseline scenario. This happens, for example, when a company adopts a strategy of increasing its market share but has no power to influence demand, adopting a reactive approach. In this case, the strategy can lead to an increase of heterogeneous demand, resulting in lost sales. On the other hand, if a company has the power to influence demand, its strategy can lead to an increase in demand for products that can be allocated to resources with idle capacity. Thus, we conclude that the optimal SC profit is not proportional to demand. In these cases, a proactive attitude of the sales team contributes when it acts cohesively with the production, distribution, and procurement teams, promoting an increase in demand for the ideal production mix.

# 6 CONCLUSION

This study proposed a multi-cut BD algorithm to solve a 2SSP model for tactical SC planning, admitting uncertainty and reflecting the rolling horizon planning practice in the context of the S&OP methodology. The algorithm and model were evaluated by numerical experiments and by a case study of a Brazilian flat steel industry that adopts S&OP for balancing supply and demand in the medium term.
Numerical experiments showed that the multi-cut BD method becomes more attractive as the problem increases in size: the proposed method solved large-scale instances in nearly one-tenth of the monolithic model's solving time. The case study showed the 2SSP model's adequacy to the rolling horizon planning framework adopted in the S&OP methodology. Successful implementation, however, requires top management support, cross-functional integration, metrics monitoring, an appropriate information system, and training. This study fills some literature gaps, as the general model adequately tackles the S&OP-specific aspect of rolling horizon planning, and the proposed multi-cut BD algorithm solves large-scale 2SSP MILP SC problems. In general, the findings suggest that the optimal SC profit is not proportional to demand, due to capacity constraints along the SC. Such findings reinforce the usefulness of the proposed model to support the S&OP process, raising the synergy of the sales team with the production, distribution, and procurement teams. Some limitations of the study are worth mentioning. Although the numerical experiments considered up to 200 scenarios, the case study examined up to six scenarios due to RAM limitations; it should consider a set of at least 30 scenarios to obtain better statistical significance for the expected value of the objective function. Further research may improve the multi-cut decomposition method and formulation with acceleration techniques, and extend the stochastic formulation to admit more elements of uncertainty. Additional avenues for development include formulating the capacity planning problem and multi-commodity network flow via nonlinear global optimization (Ferreira et al., 2013), or robust optimization models (Babazadeh & Sabbaghnia, 2018) for examining multiple planning scenarios and alternative risk profiles. These procedures can increase the integration of executive leaders with the S&OP team toward a strategic S&OP process.

# Acknowledgments

The authors would like to thank the editor and referees for the comments and feedback that improved the quality of this paper.

# References

• 1 ALEM D & MORABITO R. 2013. Risk-averse two-stage stochastic programs in furniture plants. OR Spectrum, 35(4): 773-806.
• 2 ALMEIDA JFDF, CONCEIÇÃO SV, PINTO LR, DE CAMARGO RS & JÚNIOR GDM. 2018. Flexibility evaluation of multiechelon supply chains. PLoS ONE, 13(3): e0194050.
• 3 BA BH, PRINS C & PRODHON C. 2018. A generic tactical planning model to supply a biorefinery with biomass. Pesquisa Operacional, 38: 1-30. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100001&nrm=iso.
• 4 BABAZADEH R & SABBAGHNIA A. 2018. Evaluating the performance of robust and stochastic programming approaches in a supply chain network design problem under uncertainty. International Journal of Advanced Operations Management, 10(1): 1-18.
• 5 BENDERS JF. 1962. Partitioning procedures for solving mixed-variables programming problems. Numerische Mathematik, 4(1): 238-252.
• 6 BIRGE JR & LOUVEAUX F. 2011. Introduction to stochastic programming. Springer Science & Business Media.
• 7 BUXEY G. 2003. Strategy not tactics drives aggregate planning. International Journal of Production Economics, 85(3): 331-346.
• 8 CALFA BA, AGARWAL A, BURY SJ, WASSICK JM & GROSSMANN IE. 2015.
Data-Driven Simulation and Optimization Approaches To Incorporate Production Variability in Sales and Operations Planning. Industrial & Engineering Chemistry Research, 54(29): 7261-7272. • 9 CARNIATO A & CAMPONOGARA E. 2011. Integrated coal-mining operations planning: modeling and case study. International Journal of Coal Preparation and Utilization, 31(6): 299-334. • 10 CHEN-RITZO CH, ERVOLINA T, HARRISON TP & GUPTA B. 2010. Sales and operations planning in systems with order configuration uncertainty. European journal of operational research, 205(3): 604-614. • 11 CLARK AJ & SCARF H. 1960. Optimal policies for a multi-echelon inventory problem. Management science, 6(4): 475-490. • 12 CUNHA P, OLIVEIRA F & RAUPP FM. 2017. Periodic review system for inventory replenishment control for a two-echelon logistics network under demand uncertainty: A two-stage stochastic programing approach. Pesquisa Operacional, 37(2): 247-276. • 13 DANTZIG GB. 1955. Linear programming under uncertainty. Management science, 1(3-4): 197-206. • 14 DILLON M, OLIVEIRA F & ABBASI B. 2017. A two-stage stochastic programming model for inventory management in the blood supply chain. International Journal of Production Economics, 187: 27-41. • 15 DUTTA G & FOURER R. 2001. A survey of mathematical programming applications in integrated steel plants. Manufacturing & Service Operations Management, 3(4): 387-400. • 16 FABIAN T. 1958. A linear programming model of integrated iron and steel production. Management Science, 4(4): 415-449. • 17 FERREIRA RPM, LUNA HPL, MAHEY P & SOUZA MCD. 2013. Global optimization of capacity expansion and flow assignment in multicommodity networks. Pesquisa Operacional, 33(2): 217-234. • 18 FOURER R, GAY DM & KERNIGHAN BW. 2003. AMPL. 2nd ed.. Thomson Books. 517 pp. • 19 GUPTA A & MARANAS CD. 2003. Managing demand uncertainty in supply chain planning. Computers & Chemical Engineering, 27(8): 1219-1227. • 20 HAX AC & MEAL HC. 1973. 
Hierarchical integration of production planning and scheduling. Sloan School of Management. Available at: http://hdl.handle.net/1721.1/1868. • 21 KAYVANFAR V, HUSSEINI SM, SAJADIEH MS & KARIMI B. 2018. A multi-echelon multi-product stochastic model to supply chain of small-and-medium enterprises in industrial clusters. Computers & Industrial Engineering, 115: 69-79. • 22 KING AJ & WALLACE SW. 2012. Modeling with stochastic programming. Springer Science & Business Media. • 23 LAPORTE G & LOUVEAUX FV. 1993. The integer L-shaped method for stochastic integer programs with complete recourse. Operations Research Letters, 13(3): 133-142. • 24 MORAES LA & FARIA LF. 2016. A stochastic programming approach to liquified natural gas planning. Pesquisa Operacional, 36: 151-165. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382016000100151&nrm=iso. • 25 MT THOMÉ A, SOUCASAUX SOUSA R & DO CARMO LF. 2014. Complexity as contingency in sales and operations planning. Industrial Management & Data Systems, 114(5): 678-695. • 26 NEMATI Y, MADHOSHI M & GHADIKOLAEI AS. 2017. The effect of Sales and Operations Planning (S&OP) on supply chain's total performance: A case study in an Iranian dairy company. Computers & Chemical Engineering, 104: 323-338. • 27 OLIVEIRA F, GROSSMANN IE & HAMACHER S. 2014. Accelerating Benders stochastic decomposition for the optimization under uncertainty of the petroleum product supply chain. Computers & Operations Research, 49: 47-58. • 28 OTSUKA H. 2017. Capacity developments in the world steel industry. Tech. rep. OECD. Available at: oe.cd/steelcapacity. • 29 PEDROSO CB, DA SILVA AL & TATE WL. 2016. Sales and Operations Planning (S&OP): Insights from a multi-case study of Brazilian organizations. International Journal of Production Economics, 182: 213-229. • 30 POCHET Y & WOLSEY LA.
2006. Production planning by mixed integer programming. Springer. • 31 SEONG D & SUH MS. 2012. An integrated modelling approach for raw material management in a steel mill. Production Planning & Control, 23(12): 922-934. • 32 SHABANI N & SOWLATI T. 2016. A hybrid multi-stage stochastic programming-robust optimization model for maximizing the supply chain of a forest-based biomass power plant considering uncertainties. Journal of Cleaner Production, 112: 3285-3293. • 33 TAŞKIN ZC, AĞRALI S, ÜNAL AT, BELADA V & GÖKTEN-YILMAZ F. 2015. Mathematical Programming-Based Sales and Operations Planning at Vestel Electronics. Interfaces, 45(4): 325-340. • 34 THOMÉ AMT, SCAVARDA LF, FERNANDEZ NS & SCAVARDA AJ. 2012. Sales and operations planning: A research synthesis. International Journal of Production Economics, 138(1): 1-13. • 35 THOMÉ FS, BINATO S, PEREIRA MV, CAMPODÓNICO N, FAMPA MH & COSTA JR LCD. 2013. Decomposition approach for generation and transmission expansion planning with implicit multipliers evaluation. Pesquisa Operacional, 33: 343-359. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382013000300002&nrm=iso » http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382013000300002&nrm=iso • 36 TUOMIKANGAS N & KAIPIA R. 2014. A coordination framework for sales and operations planning (S&OP): Synthesis from the literature. International Journal of Production Economics, 154: 243-262. • 37 VAN SLYKE RM & WETS R. 1969. L-shaped linear programs with applications to optimal control and stochastic programming. SIAM Journal on Applied Mathematics, 17(4): 638-663. • 38 WANG JZ, HSIEH ST & HSU PY. 2012. Advanced sales and operations planning framework in a company supply chain. International Journal of Computer Integrated Manufacturing, 25(3): 248-262. • 39 WANG S, GANGAMMANAVAR H, EKŞIOĞLU S & MASON SJ. 2020. Statistical estimation of operating reserve requirements using rolling horizon stochastic optimization. 
Annals of Operations Research, 292(1): 371-397. • 40 YOU F & GROSSMANN IE. 2013. Multicut Benders decomposition algorithm for process supply chain planning under uncertainty. Annals of Operations Research, 210(1): 191- 211. • 41 ZHULANOVA J & ZHULANOV K. 2014. Coordination between production and sales planning in an oil company based on Lagrangean Decomposition. Master’s thesis. Norwegian School of Economics. # Publication Dates • Publication in this collection 21 Apr 2021 • Date of issue 2021
http://yakukon.com/networking-channels-gdu/bwxc57.php?eaa498=obtuse-angle-meaning
## obtuse angle meaning

An obtuse angle is an angle whose measure is strictly between \(90\degree\) and \(180\degree\); "between" here means we should not consider \(90\degree\) and \(180\degree\) themselves. An angle of exactly \(90\degree\) is a right angle, and an angle of exactly \(180\degree\) is a straight angle, so in other words an obtuse angle is between a right angle and a straight angle. The word obtuse comes from a Latin word meaning "blunt, dull, stupid" — obtuse angles in geometry are not stupid; they are blunt. The bisector of an obtuse angle always forms two acute angles, and if ∠MPR is an acute angle and line PQ is in the interior of ∠MPR, then ∠QPR must also be acute. Perpendicular lines, by contrast, intersect to form four right angles.

The term obtuse is also used in the context of triangles. An obtuse triangle (or obtuse-angled triangle) is a triangle with one obtuse angle and two acute angles; geometrically, such a triangle can be constructed using ordinary geometric tools. Since a triangle's angles must sum to \(180\degree\) in Euclidean geometry, the other two angles have to be acute (less than \(90\degree\)), so no Euclidean triangle can have more than one obtuse angle, and a triangle cannot be right-angled and obtuse-angled at the same time. An equilateral triangle can never be obtuse: it has equal sides and angles, so each angle measures \(60\degree\), which is acute. An acute triangle (or acute-angled triangle), for comparison, is a triangle with three acute angles. Triangle ABC with angle A between \(90\degree\) and \(180\degree\) is classified as an obtuse triangle; an obtuse-angled triangle can be scalene or isosceles, but never equilateral.

In this section we will define the trigonometric ratios of an obtuse angle as follows. Place the angle \(\theta\) in standard position and choose a point \(P\) with coordinates \((x,y)\) on the terminal side.

Outside mathematics, when botanists and zoologists say that something is obtuse, they mean that it is not sharp or pointed.
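The standard-position recipe can be made concrete with one worked case (the point \(P(-1, \sqrt{3})\) is an illustrative choice, not taken from the text above):

```latex
% Trigonometric ratios of the obtuse angle \theta = 120^\circ.
% Take P(x, y) = (-1, \sqrt{3}) on the terminal side, so
%   r = \sqrt{x^2 + y^2} = \sqrt{1 + 3} = 2.
\sin 120^\circ = \frac{y}{r} = \frac{\sqrt{3}}{2}, \qquad
\cos 120^\circ = \frac{x}{r} = -\frac{1}{2}, \qquad
\tan 120^\circ = \frac{y}{x} = -\sqrt{3}.
```

Note how the obtuse angle puts \(P\) in the second quadrant, so \(x\) is negative: the cosine and tangent come out negative while the sine stays positive.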
http://mathhelpforum.com/differential-geometry/77984-guassian-curvature-over-zero-print.html
Gaussian curvature over zero

• March 10th 2009, 12:18 PM dopi

Gaussian curvature over zero

The Gaussian curvature of a surface is $(LN - M^2)/(EG - F^2)$. Basically i am doing a question where i found that $EG - F^2 = 0$, so i got $(LN - M^2)/0$. i was wondering if anyone knows what this means for the Gaussian curvature or the surface, as it is divided by zero.

• March 11th 2009, 06:24 AM Laurent

Quote: Originally Posted by dopi
i was wondering if anyone knows what this means for the Gaussian curvature or the surface, as it is divided by zero.

I would say this means you made a mistake, or that your surface is not smooth... If the surface is given locally by $(u,v)\mapsto X(u,v)$, then applying the Cauchy-Schwarz inequality to $\partial_u X\cdot \partial_v X$ gives $EG\geq F^2$, and equality implies that the vectors $\partial_u X,\ \partial_v X$ are colinear. However, this is supposed not to happen for a parameterization of a surface (the differential must be injective: the partial derivatives span the tangent plane).

• March 11th 2009, 04:15 PM dopi

Quote: Originally Posted by Laurent […]

here is my original question: $\alpha(u)$ is a unit-speed space curve, and we use this curve to construct a tangent developable surface with chart $(x, U)$, where $x(u,v) = \alpha(u) + v\,\alpha'(u)$ for $(u,v) \in U$, with $U = \{(u,v) \in \mathbb{R}^2 : -\infty < u < \infty,\ v > 0\}$. so i want to work out the Gaussian curvature of this surface. my solution, just for $E$, $F$, $G$:

$E = X_u \cdot X_u = 1 + v^2\,\alpha''(u)^2 + 2v\,t\cdot\alpha''(u)$
$F = X_u \cdot X_v = 1 + v\,t\cdot\alpha''(u)$
$G = t \cdot t = 1$

and $F^2 = 1 + v^2\,\alpha''(u)^2 + 2v\,t\cdot\alpha''(u)$, so $EG - F^2 = 0$, where i used $t = \alpha'(u)$ from the Frenet-Serret equations, as $t \times t = 0$ and $t \cdot t = 1$. i'm not sure what i did is right, but that's how i got $EG - F^2$ for the Gaussian curvature as zero.

• March 12th 2009, 01:33 AM Laurent

Quote: Originally Posted by dopi […]

There are mistakes in your final computation. Here are the details: You have $X_u=T+v\kappa N$ and $X_v=T$. Since $(T,N,B)$ is orthonormal, $\|X_u\|^2=1+(v\kappa)^2$, $\|X_v\|^2=1$ and $X_u\cdot X_v=1$. Then $EG-F^2=(1+\kappa^2v^2)-1=\kappa^2v^2$. Note: you can notice that $EG-F^2=\|X_u\times X_v\|^2$. This is always true, so that your previous question already gave you the result...

• March 12th 2009, 07:28 PM dopi

Quote: Originally Posted by Laurent […]

thanks for that, it makes more sense now. could u check my answer for $L$, $M$, $N$ and $M^2$:

$L = n \cdot X_{uu} = n \cdot (\alpha''(u) + v\,\alpha'''(u))$
$M = n \cdot X_{uv} = n \cdot \alpha''(u)$
$N = n \cdot X_{vv} = n \cdot 0 = 0$

so therefore $LN - M^2 = -M^2$, as $L = 0$; therefore $-M^2 = -(n \cdot \alpha''(u))^2$. so i'm assuming $n \cdot n = 1$ and therefore $-M^2 = -\alpha''(u)^2$. thanks for the response

• March 13th 2009, 08:45 AM Laurent

Quote: Originally Posted by dopi […]

Hi, I don't understand how you got $L=0$. You didn't give your formula for $n$, and that's probably where there's a problem. Using Serret-Frenet coordinates (like in my previous post), we see that the tangent vectors $\partial_u X$ and $\partial_v X$ are linear combinations of $T$ and $N$. Therefore, $n=\pm B$ (the sign depends on the sign of the curvature, I let you find out). After computations, I find $L=v\kappa\tau$ (where $\tau$ is the torsion), and $N=M=0$. Can you find the same?
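Laurent's corrected first fundamental form can be spot-checked numerically. The sketch below is plain Python; the unit-speed helix, the constants $a=3$, $b=4$, and the sample point $(u_0, v_0)$ are arbitrary illustrative choices, not from the thread. It builds the tangent developable $X(u,v) = \alpha(u) + v\,\alpha'(u)$ of the helix and checks that $EG - F^2 = \kappa^2 v^2$:

```python
import math

# Unit-speed circular helix alpha(u) = (a cos(u/c), a sin(u/c), b u/c),
# with c = sqrt(a^2 + b^2); its curvature is kappa = a / c^2.
a, b = 3.0, 4.0
c = math.hypot(a, b)  # = 5

def T(u):
    # alpha'(u), the unit tangent of the helix
    return (-a / c * math.sin(u / c), a / c * math.cos(u / c), b / c)

def App(u):
    # alpha''(u)
    return (-a / c**2 * math.cos(u / c), -a / c**2 * math.sin(u / c), 0.0)

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

u0, v0 = 0.7, 2.0
Xu = tuple(T(u0)[i] + v0 * App(u0)[i] for i in range(3))  # X_u = T + v alpha''
Xv = T(u0)                                                # X_v = T

E, F, G = dot(Xu, Xu), dot(Xu, Xv), dot(Xv, Xv)
kappa = a / c**2                                          # = 0.12 here

print(round(E * G - F * F, 12), round((kappa * v0) ** 2, 12))  # both 0.0576
```

So $EG - F^2$ agrees with $\kappa^2 v_0^2$ at the sample point, matching Laurent's formula rather than the zero obtained in the original post.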
https://ginger.readthedocs.io/en/latest/introduction.html
## What is Ginger

Ginger is a modern programming system that is built on top of a general purpose virtual machine. It comes with a modern programming language and library that you can use to write your own applications. But the whole system is designed to be open, so you can extend or replace almost every part of the system.

Why did we create Ginger when there are so many programming systems already? Because we love programming and love making neat, effective programs. But other systems have all kinds of conveniences that end up being confusing; strange corner cases that are supposed to be convenient but just make things ugly; or reasonable-looking compromises that cause lots of problems later on. So we designed Ginger to be a language that gets out of your way and lets you enjoy the experience of writing a program.

It took us a long time, more than ten years, to figure out what we thought that meant. Gradually we distilled our design goals into some key rules. One rule, for example, is "the programmer is in charge (not the language designer)". That affects the design of visibility declarations such as private/public; it requires that a programmer can get access to private variables for, say, debugging or writing unit tests. You may wonder how this can make sense - in which case turn to the chapter on packages.

Another key rule is "if one, why not many?". This rule means that anywhere in the language where there is a restriction to one item, consider making it many items. So in Ginger expressions don't return just one value, they may return any number from 0, 1, ... and so on. And methods don't just dispatch on one special 'this' argument, they may dispatch on 0, 1, 2 ... of their arguments. In fact a function is just a special kind of method that dispatches on 0 arguments. To understand how we interpreted our design rules you need to know a little about Ginger.
## Hello World in Common

In this introduction to Ginger, we will write our examples in what we call 'Common'. Ginger supports more than one programming language syntax. But we designed Common to be a neat modern language that is easy to remember and accident-resistant. But you don't have to use it. We also designed a Javascript-inspired syntax too, if you prefer. And in future versions of Ginger we will add more - it's quite easy to add new ones. (Why did we make Ginger so flexible? Because one of the things we wanted to get away from was the idea that there was a single right answer.)

So what does Common look like? Here's a simple 'hello, world!' example. It shows quite a few useful features. I have added line numbers for easy reference.

```
Line 1    # Prints a cheery message to the console.
Line 2    define hello() =>>
Line 3        println( "Hello, world!" )
Line 4    enddefine
```

On Line 1 we write an end-of-line comment which is introduced with a hash symbol followed by a space.

On Line 2 we introduce a function called 'hello'. Function definitions start with the keyword 'define' and are closed with the matching keyword 'enddefine'. This pairing of opening and closing keywords is used in many places in Common. The function head is separated from the function body by an '=>>'. There are several places where this double-headed arrow is used in Common and it always signals that a function is being defined.

On Line 3 we define the function body as calling the 'println' function on a literal string. The name 'println' is a contraction of 'PRINT then add a LiNe' and the function is part of the standard library (ginger.std). Programmers do not usually import the standard library because it is available by default. String literals use double-quotes, just like C/C++/C#, Java, Javascript and so on. Single quotes are reserved for symbol literals, which you will meet later on; for now you can think of them as a different kind of string.
On Line 4 we close the function with the ‘enddefine’ keyword. If you are working interactively you can abbreviate any keyword that starts with the prefix ‘end’ to just ‘end’. This includes ‘enddefine’, ‘endfn’, ‘endif’, ‘endfor’.
https://physics.stackexchange.com/questions/linked/38348
• 169 views — In Fermat's Principle of Least Time, how do we know that light is able to reach the end point? [duplicate] From my understanding of Fermat's Principle, you decide a start point and an end point for a light ray to travel between, and the light 'chooses' whichever path takes the least time (or technically ...

• 30 views — Doubt regarding Fermat's principle [duplicate] Which two points are we talking about in Fermat's principle? Are those points decided by light or decided by us? Can we take any two points?

• 19 views — How can we predict how a system evolves using the stationary action principle even though we need to specify the final state? [duplicate] The stationary action principle states that a system evolves between a fixed initial and fixed final configuration in such a way that the action is stationary. But isn't the final configuration what ...

• 2k views — In the Principle of Least Action, how does a particle know where it will be in the future? In his book on Classical Mechanics, Prof. Feynman asserts that it just does. But if this is really what happens (& if the Principle of Least Action is more fundamental than Newton's Laws), then ...

• 1k views — Question about the apparent loophole in principle of least action In Lagrangian formalism, given two points $(x_1,t_1)$ and $(x_2,t_2)$, we ask the question which paths $x(t)$ make the action $S=\displaystyle \int_{t_1}^{t_2}L\ \mathrm dt$ stationary and satisfy the ...

• 3k views — Mathematically speaking, is there any essential difference between initial value problems and boundary value problems? The specification of the values of a function $f$ and the "velocities" ...

• 3 answers, 730 views — Can the Euler-Lagrange equations be derived from an infinitesimal Principle of Least Action? The Euler-Lagrange equations can be derived from the Principle of Least Action using integration by parts and the fact that the variation is zero at the end points. This has a mystical air about it, ...

• 3 answers, 712 views — Is the path of stationary action unique? What are the physical implications of $L_{\dot{x}}=L_x$? Below, for any function $Q$, the notation $Q_x$ means $\frac{\partial Q}{\partial x}$, and $Q_{xx}$ means $\frac{\partial^2 Q}{\partial x^2}$. In physics, the trajectory of a particle is given by the ...

• 1 answer, 609 views — "Principle of least action" and "Principle of conservation of energy": Which one is fundamental and which one is derived? [closed] Suppose I throw a ball upwards. First it will rise under gravity and then fall under gravity. During the rising part the kinetic energy gradually decreases and the potential energy increases until ...

• 1 answer, 597 views — Lagrangian mechanics and initial conditions vs boundary conditions It bothers me that many basic books on classical mechanics don't discuss the following difference between "Newton's laws" and the "Principle of stationary action". Newton's laws can predict the ...

• 1 answer, 472 views — Hamilton-Jacobi theory and initial value problem? Having read through some recent posts regarding the Lagrangian formulation being interpreted as an initial value problem rather than the boundary condition problem we are familiar with, I ...

• 1 answer, 314 views — Maximum aging and path of rock When a rock falls from a ledge, why does it head to the surface and not up to where time runs faster? If a rock, free from forces, follows a worldline of maximum aging, why would that rock approach ...

• 2 answers, 223 views — Why can we consider the endpoint fixed in the derivation of the Euler-Lagrange equation in mechanics? In mechanics, we obtain the equations of motion (Euler-Lagrange equations) via Hamilton's principle by considering stationary points of the action $$S = \int_{t_i}^{t_f} L ~ dt$$ where we have $L = T - ...$

• I am citing from Landau and Lifschitz, this statement that will seem to you well-known, trivial, etc: "Between these positions, (i.e. $q_1$ and $q_2$) the system moves then in such a way that the ...
2019-11-15 05:34:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8867612481117249, "perplexity": 590.8088238775526}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668585.12/warc/CC-MAIN-20191115042541-20191115070541-00326.warc.gz"}
https://cs.stackexchange.com/questions/117797/weight-of-minimum-spanning-tree-of-g-and-t
# Weight of minimum spanning tree of G and T

We have an undirected weighted connected graph $$G=(V,E)$$, and a minimum spanning tree $$T$$ of $$G$$. Let $$v$$ be some new vertex. We form new graphs $$G'$$ and $$T'$$: they are the same as $$G$$ and $$T$$, except for the extra node $$v$$, which is connected (with new weighted edges) to the same nodes in both graphs. I need to prove that the MST of $$G'$$ has the same weight as the MST of $$T'$$. I was thinking of using Kruskal's or Prim's algorithm to show that, given an MST of $$T'$$, we can run Kruskal or Prim on $$G'$$ to find it, but with no results. Any idea?

• I don't understand the construction of $G'$ and $T'$ – lox Nov 29 '19 at 18:16
• Sorry I wasn't clear. $G' = ( V ⋃$ {$v$} $, E' )$. $E' = E ⋃$ { $(u,v) | u ∈ V$}. $T'$ is defined similarly. – usert Nov 29 '19 at 18:26
• So the new node $v$ is connected to all nodes in the original graph? – Bryce Kille Nov 29 '19 at 18:37
• Not necessarily; it can be connected to any node in $G$ and $T$. – usert Nov 29 '19 at 18:39
• Added a small example. – usert Nov 29 '19 at 18:42
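Not a proof, but the claim is easy to sanity-check numerically before trying to prove it. The sketch below (Python; a randomized check, not an argument) builds a random complete graph $$G$$, takes an MST $$T$$ with Kruskal's algorithm, attaches a new node to every old node with the same fresh edges in both graphs (one instance of the construction, since the question allows any subset), and compares the MST weights of $$G'$$ and $$T'$$:

```python
import itertools
import random

def kruskal(n, edges):
    # Kruskal's algorithm with a simple union-find; edges are (weight, u, v)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def weight(tree):
    return sum(w for w, _, _ in tree)

random.seed(0)
for _ in range(100):
    n = random.randint(3, 8)
    G = [(random.randint(1, 20), u, v)
         for u, v in itertools.combinations(range(n), 2)]
    T = kruskal(n, G)                       # an MST of G
    # attach the new node v = n with the SAME fresh edges in both graphs
    new_edges = [(random.randint(1, 20), u, n) for u in range(n)]
    assert weight(kruskal(n + 1, G + new_edges)) == \
           weight(kruskal(n + 1, T + new_edges))
```

The check passing suggests the route worth formalizing: every edge of $$G'$$'s MST that is not incident to $$v$$ can be exchanged into $$T$$ without changing the weight (cut/cycle property), so restricting Kruskal to $$T' = T \cup \{\text{new edges}\}$$ loses nothing.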
2020-01-22 06:29:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8542854189872742, "perplexity": 316.3914264777119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00546.warc.gz"}
http://hellenicaworld.com/Science/Mathematics/en/AdditiveFunction.html
In number theory, an additive function is an arithmetic function f(n) of the positive integer n such that whenever a and b are coprime, the function of the product is the sum of the functions:[1] f(ab) = f(a) + f(b). An additive function f(n) is said to be completely additive if f(ab) = f(a) + f(b) holds for all positive integers a and b, even when they are not coprime. Totally additive is also used in this sense by analogy with totally multiplicative functions. If f is a completely additive function then f(1) = 0.

Examples

Examples of arithmetic functions which are completely additive are:

• The restriction of the logarithmic function to N.
• The multiplicity of a prime factor p in n, that is the largest exponent m for which p^m divides n.
• a0(n) – the sum of primes dividing n counting multiplicity, sometimes called sopfr(n), the potency of n or the integer logarithm of n (sequence A001414 in the OEIS). For example:

a0(4) = 2 + 2 = 4
a0(20) = a0(2^2 · 5) = 2 + 2 + 5 = 9
a0(27) = 3 + 3 + 3 = 9
a0(144) = a0(2^4 · 3^2) = a0(2^4) + a0(3^2) = 8 + 6 = 14
a0(2,000) = a0(2^4 · 5^3) = a0(2^4) + a0(5^3) = 8 + 15 = 23
a0(2,003) = 2003
a0(54,032,858,972,279) = 1240658
a0(54,032,858,972,302) = 1780417
a0(20,802,650,704,327,415) = 1240681

• The function Ω(n), defined as the total number of prime factors of n, counting multiple factors multiple times, sometimes called the "Big Omega function" (sequence A001222 in the OEIS). For example:

Ω(1) = 0, since 1 has no prime factors
Ω(4) = 2
Ω(16) = Ω(2·2·2·2) = 4
Ω(20) = Ω(2·2·5) = 3
Ω(27) = Ω(3·3·3) = 3
Ω(144) = Ω(2^4 · 3^2) = Ω(2^4) + Ω(3^2) = 4 + 2 = 6
Ω(2,000) = Ω(2^4 · 5^3) = Ω(2^4) + Ω(5^3) = 4 + 3 = 7
Ω(2,001) = 3
Ω(2,002) = 4
Ω(2,003) = 1
Ω(54,032,858,972,279) = 3
Ω(54,032,858,972,302) = 6
Ω(20,802,650,704,327,415) = 7

Examples of arithmetic functions which are additive but not completely additive are:

• ω(n), defined as the total number of different prime factors of n (sequence A001221 in the OEIS). For example:

ω(4) = 1
ω(16) = ω(2^4) = 1
ω(20) = ω(2^2 · 5) = 2
ω(27) = ω(3^3) = 1
ω(144) = ω(2^4 · 3^2) = ω(2^4) + ω(3^2) = 1 + 1 = 2
ω(2,000) = ω(2^4 · 5^3) = ω(2^4) + ω(5^3) = 1 + 1 = 2
ω(2,001) = 3
ω(2,002) = 4
ω(2,003) = 1
ω(54,032,858,972,279) = 3
ω(54,032,858,972,302) = 5
ω(20,802,650,704,327,415) = 5

• a1(n) – the sum of the distinct primes dividing n, sometimes called sopf(n) (sequence A008472 in the OEIS). For example:

a1(1) = 0
a1(4) = 2
a1(20) = 2 + 5 = 7
a1(27) = 3
a1(144) = a1(2^4 · 3^2) = a1(2^4) + a1(3^2) = 2 + 3 = 5
a1(2,000) = a1(2^4 · 5^3) = a1(2^4) + a1(5^3) = 2 + 5 = 7
a1(2,001) = 55
a1(2,002) = 33
a1(2,003) = 2003
a1(54,032,858,972,279) = 1238665
a1(54,032,858,972,302) = 1780410
a1(20,802,650,704,327,415) = 1238677

Multiplicative functions

From any additive function f(n) it is easy to create a related multiplicative function g(n), i.e. a function with the property that whenever a and b are coprime we have g(ab) = g(a) × g(b). One such example is g(n) = 2^f(n).

Summatory functions

Given an additive function f, let its summatory function be defined by $${\displaystyle {\mathcal {M}}_{f}(x):=\sum _{n\leq x}f(n)}$$.
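The example values above are easy to verify numerically. A small sketch (Python, trial-division factorization — fine for modest n, though the very large values in the tables would want a faster factoring method):

```python
def prime_factors(n):
    # map each prime factor of n to its multiplicity
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n):
    # Ω(n): prime factors counted with multiplicity (completely additive)
    return sum(prime_factors(n).values())

def little_omega(n):
    # ω(n): distinct prime factors (additive, but not completely additive)
    return len(prime_factors(n))

def sopfr(n):
    # a0(n): sum of primes dividing n, counted with multiplicity
    return sum(p * k for p, k in prime_factors(n).items())

def sopf(n):
    # a1(n): sum of the distinct primes dividing n
    return sum(prime_factors(n))
```

For instance, `big_omega(2000)` returns 7 and `sopfr(144)` returns 14, matching the tables, and additivity can be spot-checked on coprime arguments, e.g. `little_omega(4 * 27) == little_omega(4) + little_omega(27)`.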
The average of f is given exactly as

$${\displaystyle {\mathcal {M}}_{f}(x)=\sum _{p^{\alpha }\leq x}f(p^{\alpha })\left(\left\lfloor {\frac {x}{p^{\alpha }}}\right\rfloor -\left\lfloor {\frac {x}{p^{\alpha +1}}}\right\rfloor \right).}$$

The summatory function of f can be expanded as

$${\displaystyle {\mathcal {M}}_{f}(x)=xE(x)+O({\sqrt {x}}\cdot D(x))}$$

where

$${\displaystyle {\begin{aligned}E(x)&=\sum _{p^{\alpha }\leq x}f(p^{\alpha })p^{-\alpha }(1-p^{-1})\\D^{2}(x)&=\sum _{p^{\alpha }\leq x}|f(p^{\alpha })|^{2}p^{-\alpha }.\end{aligned}}}$$

The average of the function $$f^{2}$$ is also expressed by these functions as

$${\displaystyle {\mathcal {M}}_{f^{2}}(x)=xE^{2}(x)+O(xD^{2}(x)).}$$

There is always an absolute constant $${\displaystyle C_{f}>0}$$ such that for all natural numbers $$x \geq 1$$,

$${\displaystyle \sum _{n\leq x}|f(n)-E(x)|^{2}\leq C_{f}\cdot xD^{2}(x).}$$

Let

$${\displaystyle \nu (x;z):={\frac {1}{x}}\#\left\{n\leq x:{\frac {f(n)-A(x)}{B(x)}}\leq z\right\}.}$$

Suppose that f is an additive function with $${\displaystyle -1\leq f(p^{\alpha })=f(p)\leq 1}$$ such that as $$x\rightarrow \infty$$,

$${\displaystyle B(x)=\sum _{p\leq x}f^{2}(p)/p\rightarrow \infty .}$$

Then $${\displaystyle \nu (x;z)\sim G(z)}$$ where G(z) is the Gaussian distribution function

$${\displaystyle G(z)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{z}e^{-t^{2}/2}dt.}$$

Examples of this result related to the prime omega function and the numbers of prime divisors of shifted primes include the following, for fixed $${\displaystyle z\in \mathbb {R} }$$, where the relations hold for $${\displaystyle x\gg 1}$$:

$${\displaystyle \#\{n\leq x:\omega (n)-\log \log x\leq z(\log \log x)^{1/2}\}\sim xG(z),}$$

$${\displaystyle \#\{p\leq x:\omega (p+1)-\log \log x\leq z(\log \log x)^{1/2}\}\sim \pi (x)G(z).}$$

See also: Prime omega function, Multiplicative function, Arithmetic function

References

Erdös, P., and M. Kac.
On the Gaussian Law of Errors in the Theory of Additive Functions. Proc Natl Acad Sci USA. 1939 April; 25(4): 206–207. online

Janko Bračič, Kolobar aritmetičnih funkcij (Ring of arithmetical functions), Obzornik mat. fiz. 49 (2002) 4, pp. 97–108. (MSC (2000) 11A25)

Iwaniec and Kowalski, Analytic number theory, AMS (2004).
2021-03-04 13:21:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7959282398223877, "perplexity": 2131.7860045210414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00429.warc.gz"}
http://julia-harz.de/publication/deppisch-impact-2015/
# Impact of Neutrinoless Double Beta Decay on Models of Baryogenesis

### Abstract

Interactions that manifest themselves as lepton number violating processes at low energies, in combination with sphaleron transitions, typically erase any pre-existing baryon asymmetry of the Universe. We demonstrate in a model independent approach that the observation of neutrinoless double beta decay would impose a stringent constraint on mechanisms of high-scale baryogenesis, including leptogenesis scenarios. Further, we discuss the potential of the LHC to model independently exclude high-scale leptogenesis scenarios when observing lepton number violating processes. In combination with the observation of lepton flavor violating processes, we can further strengthen this argument, closing the loophole of asymmetries being stored in different lepton flavors.

Type: Publication

arXiv:1510.06305 [hep-ph]
2022-10-06 17:47:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212278485298157, "perplexity": 1578.3803659204175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00474.warc.gz"}
https://www.albert.io/learn/act-science/question/butane-vapor-pressure
A student is conducting an experiment to determine what factors affect the vapor pressure of methanol. The student is measuring the vapor pressure of methanol using a mercury manometer, as pictured below. The experimental apparatus has an evacuated 1.0 liter bulb, into which the student injects a measured amount of methanol. The methanol then vaporizes to a gas until the vapor pressure equilibrates with the liquid, by saturating the bulb volume with methanol vapor. The temperature of the methanol is controlled by placing it in a water bath. The student measures the vapor pressure by determining the height of the mercury column after the methanol is injected.

Experiment 1 Results
The student first tried varying the volume of methanol injected into the bulb.

Experiment 2 Results
Next, the student tried varying the temperature of a fixed volume of methanol. The student made sure there was always methanol liquid in the glass bulb when the measurement of vapor pressure was taken.

Experiment 3 Results
Finally, the student tried measuring the vapor pressure of different substances, both with and without alcohol $(-OH)$ groups, at $20° C$.

Based on the data from Experiment 3, which of the following values would be CLOSEST to the vapor pressure of butane?

A $8.77 \ mmHg$
B $14.9 \ mmHg$
C $1290 \ mmHg$
D $6390 \ mmHg$
2017-03-24 21:49:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47642359137535095, "perplexity": 1343.4353365969248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188623.98/warc/CC-MAIN-20170322212948-00259-ip-10-233-31-227.ec2.internal.warc.gz"}
https://codemyroad.wordpress.com/2014/05/14/2048-ai-the-intelligent-bot/
Typical AI run # Introduction In this article, we develop a simple AI for the game 2048 using the Expectimax algorithm and “weight matrices”, which will be described below, to determine the best possible move at each turn. The implementation of the AI described in this article can be found here. The source files for the implementation can be found here. # Recursivity of score function We define some symbols and functions in order to construct a function to compute the score of any one game state (i.e. grid). • Game state: A $4 \times 4$ square matrix $\mathbf{A}$ where $\mathbf{A}_{ij}$ is equal to the value of the cell in the game grid at row $i$ and column $j$. • $\text{score}(\mathbf{A})$ : The numerical score of the game state $\mathbf{A}$. • $D_\mathbf{A}$ : The set of possible move directions (up, down, left, right) for $\mathbf{A}$. • $\text{move}(\mathbf{A}, d)$ : Produces a game state by applying a move operation to $\mathbf{A}$ in the direction $d \in D_\mathbf{A}$. • $S_\mathbf{A}$ : The set of possible game states from randomly spawning a new tile on $\mathbf{A}$. • $P(\mathbf{A}', \mathbf{A})$ : The probability of randomly spawning a new tile on $\mathbf{A}$ to produce $\mathbf{A}'$. The score function of game state $\mathbf{A}$ can then be written as $\displaystyle \text{score}(\mathbf{A}) = \sum_{A' \in S_\mathbf{A}}P(\mathbf{A}', \mathbf{A})\cdot\max_{d \in D_\mathbf{A'}}\text{score}(\text{move}(\mathbf{A}', d))$ # Score function at terminal states The following score function will be used instead when the recursive calculation described above reaches a termination state: $\displaystyle \text{score}_\text{terminal}(\mathbf{A}) = \underset{\mathbf{W}' \equiv \mathbf{W}}{\max} \mathbf{W}' \circ \mathbf{A}$ where • $\mathbf{W}$ : A $4 \times 4$ square weight matrix to be defined by the programmer. 
• $\mathbf{W} \circ \mathbf{A}$ : The entrywise product of $\mathbf{W}$ and $\mathbf{A}$; that is, $(\mathbf{W} \circ \mathbf{A})_{ij} = \mathbf{W}_{ij}\mathbf{A}_{ij}$, summed over all entries to give a scalar score.
• $\mathbf{W}' \equiv \mathbf{W}$ is said to hold true if and only if $\mathbf{W}'$ can be produced from $\mathbf{W}$ by some sequence of clockwise-rotating (such that $\mathbf{Y}_{ij} = \mathbf{X}_{(5 - j)i}$ for a $4 \times 4$ matrix) and transposing (such that $\mathbf{Y}_{ij} = \mathbf{X}_{ji}$).

The recursive calculation can reach a terminal state under two circumstances. The first corresponds to a "game over" situation in the game – a random tile is spawned such that the user can no longer make any moves. That is, $D_\mathbf{A} = \emptyset$. The second circumstance occurs when the calculation reaches a recursion depth limit predefined by the programmer. Such a search limit is necessary because it is practically very hard, if not impossible, for the AI to go through all the possible "game over" situations – the number of such cases increases exponentially with recursion depth. As such, it is crucial for a recursion depth limit to be in place in order for the AI to compute the best move fast enough.
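As a concrete sketch, the terminal score — the best weight-grid match over all rotations and transpositions of $\mathbf{W}$ — might look like the following in Python. This is an illustration, not the linked implementation; it reads $\mathbf{W}' \circ \mathbf{A}$ as the sum of the entrywise product:

```python
import numpy as np

def terminal_score(A, W):
    # max over the 8 symmetries of W (4 rotations, each optionally transposed)
    # of the summed entrywise product with the grid A
    best = -np.inf
    for k in range(4):
        Wr = np.rot90(W, k)
        for Wc in (Wr, Wr.T):
            best = max(best, float(np.sum(Wc * A)))
    return best
```

With the simple decreasing weight matrix discussed below, a lone 8-tile in the top-left corner scores 7 × 8 = 56, and no other orientation of $\mathbf{W}$ does better — the symmetry max rewards big tiles near *some* corner, not one fixed corner.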
Then the best move is given by

$\displaystyle \text{decision}(\mathbf{A}) = \underset{d \in D_\mathbf{A}}{\text{argmax}}~\text{score}(\text{move}(\mathbf{A}, d))$

It seems that our AI is more or less complete… except that we have not decided on the values that the weight matrix $\mathbf{W}$ should contain!

# The weight matrix

A common strategy is to push the bigger tiles near any one corner and the smaller tiles away from that corner. With a weight matrix, we can mimic this strategy by setting the values of $\mathbf{W}$ such that the weight decreases from the top left to the bottom right. (It can also be from the top right to the bottom left, but they are the same because all possible rotations and transpositions of $\mathbf{W}$ are checked during the computation of the terminal scores.) A simple example is

$\mathbf{W} = \left(\begin{matrix}7&6&5&4\\6&5&4&3\\5&4&3&2\\4&3&2&1\end{matrix}\right)$

With the above example, the AI tends to make moves such that the bigger tiles are closer to the corner than the smaller tiles are, which agrees with the desired strategy. Of course, there is no reason for the corner tile to be $\frac{7}{6}\approx 1.17$ times more "attractive" than its adjacent tiles, and the same applies for the other tiles. As such the above weight matrix may not be the most ideal. An optimization search carried out using randomly generated, diagonally monotone decreasing weight matrices produces the following matrix as the most optimal (among the candidate matrices):

$\displaystyle \mathbf{W} = \left( \begin{matrix} 0.135759&0.121925&0.102812&0.099937\\ 0.0997992&0.0888405&0.076711&0.0724143\\ 0.060654&0.0562579&0.037116&0.0161889\\ 0.0125498&0.00992495&0.00575871&0.00335193 \end{matrix}\right)$

It is possible to use other optimization methods, such as genetic algorithms and particle swarm optimization (PSO), but the details of these other methods will not be described here.
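Putting the combined score function and decision rule together, a minimal expectimax sketch might look like the following. This is illustrative Python, not the linked implementation: it uses a plain weight-matrix dot product at the depth limit (omitting the symmetry max for brevity) and assumes new tiles spawn as a 2 with probability 0.9 and a 4 with probability 0.1:

```python
import numpy as np

def slide_row(row):
    # slide-and-merge one row to the left, 2048-style
    tiles = [t for t in row if t]
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(2 * tiles[i]); i += 2
        else:
            out.append(tiles[i]); i += 1
    return out + [0] * (len(row) - len(out))

def moves(A):
    # yield (k, grid) for each rotation k whose left-slide changes the grid
    for k in range(4):
        B = np.rot90(A, k)
        C = np.rot90(np.array([slide_row(r) for r in B]), -k)
        if not np.array_equal(A, C):
            yield k, C

def spawns(A):
    # all (probability, grid) pairs after spawning a random tile
    empty = list(zip(*np.nonzero(A == 0)))
    for i, j in empty:
        for val, p in ((2, 0.9), (4, 0.1)):
            B = A.copy(); B[i, j] = val
            yield p / len(empty), B

def score(A, W, depth):
    # expected best achievable weight score, per the combined score function:
    # average over spawns, best move per spawn, weight score at the limit
    total = 0.0
    for p, B in spawns(A):
        opts = [score(C, W, depth - 1) for _, C in moves(B)] if depth > 0 else []
        total += p * (max(opts) if opts else float(np.sum(W * B)))
    return total

def decision(A, W, depth):
    # best rotation index k (a direction under np.rot90's convention);
    # assumes at least one legal move exists
    return max(moves(A), key=lambda m: score(m[1], W, depth))[0]
```

Note that the "game over" terminal case falls out naturally: when a spawned grid has no legal moves, `opts` is empty and the weight score is used, exactly as in the piecewise formula above.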
# Results and conclusion Using the described score function and weight matrix, an AI was successfully implemented (here) which can achieve the 4096 tile more than 40% of the time with recursion depth of 6, and at least the 8192 tile more than 30% of the time with recursion depth 8. One may also wish to view a full run of the AI with recursion depth of 8 over here on Youtube. The AI described here is considerably simple but its performance may indeed be limited. In order to improve on the AI, one can consider other heuristics and strategies, such as those described in the discussion over here at StackExchange. Nonetheless, we hope that you have enjoyed this article, and that it has provided you with some valuable insight of the basic, general workings of a 2048 AI, specifically the Expectimax algorithm.
2018-08-18 22:31:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 41, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7299560308456421, "perplexity": 492.50932192786496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213794.40/warc/CC-MAIN-20180818213032-20180818233032-00076.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=256008
## Module Question

$\Delta p \Delta x\geq \frac{h}{4\pi }$

Jolie Sukonik 2B
Posts: 55
Joined: Wed Sep 30, 2020 9:44 pm

### Module Question

The hydrogen atom has a radius of approximately 0.05 nm. Assume that we know the position of an electron to an accuracy of 1% of the hydrogen radius; calculate the uncertainty in the speed of the electron using the Heisenberg uncertainty principle.

How do I find the uncertainty in the position first? I know once I find it I can just calculate the uncertainty in velocity, but I am struggling on how to find delta x.

Q Scarborough 1b
Posts: 70
Joined: Wed Sep 30, 2020 9:59 pm
Been upvoted: 2 times

### Re: Module Question

If you know the radius of a hydrogen atom, you can just find 1% of that to find your uncertainty. In this case it would be 0.05 nm × 0.01 = 5 × 10^-4 nm = 5 × 10^-13 m, which would be your uncertainty in position.

Akshata Kapadne 2K
Posts: 73
Joined: Wed Sep 30, 2020 9:40 pm

### Re: Module Question

Since it says that we know the position of an electron to an accuracy of 1% of the hydrogen radius, and the radius is 0.05 nm, the uncertainty in position is 0.01(0.05 nm) = 5 × 10^-13 m. From there, you can find delta(v).
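To finish the calculation the replies set up, a quick sketch (Python; the electron mass and Planck's constant are standard table values, not given in the thread):

```python
import math

h = 6.626e-34        # Planck's constant, J s
m_e = 9.109e-31      # electron mass, kg

dx = 0.01 * 0.05e-9            # 1% of the 0.05 nm hydrogen radius, in metres
dp = h / (4 * math.pi * dx)    # minimum momentum uncertainty from dp*dx >= h/4pi
dv = dp / m_e                  # corresponding uncertainty in speed

print(dv)  # about 1.16e8 m/s
```

The result, roughly 1.2 × 10^8 m/s, is a sizeable fraction of the speed of light — the usual point of this exercise: confining an electron that tightly makes its speed wildly uncertain.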
2021-01-17 16:00:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27818524837493896, "perplexity": 1392.219490110403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513062.16/warc/CC-MAIN-20210117143625-20210117173625-00683.warc.gz"}
https://hyperelectronic.net/wiki/resistor/resistor-color-code/
# Resistor Color Code

The electronic color code is used to identify the values and tolerances of resistors. It generally consists of multiple bands on a resistor. There are 4, 5 and 6 band color codes for resistors. The 4 and 5 band color codes are the ones you will see most often; the 6 band color code is much rarer. The 5 band color code has one more digit than the 4 band color code and is used for more precise values. The 6 band color code has an additional band for the temperature coefficient, which is why it is rarer. There are some exceptions in the resistor color code. For example, a 0 Ohm resistor has only one black band.

##### 4 Band Resistor Color Code

To read a 4 band resistor color code, you need the chart above to know exactly what every color and band position means. Example: for a 470 ohm resistor with a tolerance of 10%, the first band would be the first digit (4), which is yellow, and the second band is the second digit (7), which is violet. We currently have a value of 47, and the third band is the multiplier. We need to multiply 47 by 10Ω to get 470Ω. The 4th band is the tolerance and would be silver for a tolerance of 10%. Figure 1 is an illustration of how these bands would be arranged on a real resistor.

$47*10\Omega=470\Omega$

##### 5 Band Resistor Color Code

To read a 5 band resistor color code, you need the chart above to know exactly what every color and band position means. Example: for a 499 ohm resistor with a tolerance of 1%, the first band would be the first digit (4), which is yellow, the second band is the second digit (9), which is white, and the third band would be the third digit (9), which is white. We currently have a value of 499, and the fourth band is the multiplier. We need to multiply 499 by 1Ω (black) to get 499Ω. The 5th band is the tolerance and would be brown for a tolerance of 1%. Figure 3 is an illustration of how these bands would be arranged on a real resistor.
$499*1\Omega=499\Omega$
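Both worked examples can be reproduced with a small lookup sketch (Python). The digit table is the standard color code; gold and silver fractional multipliers (×0.1, ×0.01) are omitted here for brevity:

```python
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistor_ohms(bands):
    # every band except the last two is a significant digit; the
    # next-to-last band is the multiplier exponent; the last band
    # (tolerance) does not affect the nominal value
    value = 0
    for color in bands[:-2]:
        value = value * 10 + DIGITS[color]
    return value * 10 ** DIGITS[bands[-2]]

print(resistor_ohms(["yellow", "violet", "brown", "silver"]))         # 470
print(resistor_ohms(["yellow", "white", "white", "black", "brown"]))  # 499
```

The same function handles the 4 and 5 band codes (and the value part of a 6 band code), since only the count of digit bands differs.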
2023-01-30 15:28:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.463975727558136, "perplexity": 716.0206860790313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00547.warc.gz"}
http://physics.stackexchange.com/tags/motion/new
# Tag Info

2
Objects in orbit come pretty close. If you don't mind venting the cabin or taking a walk outside, even air drag can be very nearly eliminated. All you have left are very small forces due to being in a non-inertial reference frame, and drag from the very, very thin atmosphere. Neither would be noticed without some very precise equipment. To reduce these ...

1
Have you heard of superfluidity? It happens when you cool liquid helium below about 2 Kelvin. The helium then will flow freely and without any friction. If you induce a current vortex in liquid helium, it will remain flowing until the end of time (however, you cannot draw energy from it, because the liquid is frictionless) or until it warms up again. So, in ...

3
(Temporarily pretend you are on the second floor of a building.) Jump up and down in one place. After you leave the floor: your displacement is always positive (i.e., above the floor); your velocity is positive (rising), zero (at your highest displacement), then negative (falling back to the floor); and your acceleration is constant and negative -- it's ...

11
Get in a car, turn it on and press the accelerator! ($a>0$)... then press the brake ($a<0$). Until the car is steady, the situation is as you described.

6
Of course. Acceleration is the rate of change of the velocity. For motion in a line, if the object is slowing down, the acceleration is opposite the velocity. If the object is speeding up, the acceleration is in the direction of the velocity. Imagine you're pedalling a bike, gaining more and more speed and then, suddenly, stop pedalling and apply the ...

5
Absolutely. Acceleration is the change in velocity, so when you say that the acceleration reverses in direction, it means that the object is transferring from either speeding up to slowing down, like a skateboard which just passed the bottom of a U-shaped ramp, or from slowing down to speeding up. Think of the skateboard as having just crested a hill. ...
30
A pendulum is a day-to-day example of this. Watch a pendulum swinging from left to right as it passes the mid point: the acceleration always points towards the mid point, so as the pendulum passes through the mid point the acceleration reverses direction but the velocity does not.
2014-07-24 02:25:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6344262361526489, "perplexity": 619.8213583244184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997884827.82/warc/CC-MAIN-20140722025804-00060-ip-10-33-131-23.ec2.internal.warc.gz"}
http://melikamp.com/math/teaching/calculus/fundamental-thm-line-integrals.shtml
### The Fundamental Theorem For Line Integrals

1. Determine whether or not $\mathbf{F}(x,y)=\langle 6x+5y,\,5x+4y\rangle$ is a conservative vector field. If it is, find a function $f$ such that $\mathbf{F}=\nabla f$.

   Answer: $f(x,y)=3x^{2}+5xy+2y^{2}+K$

2. Determine whether or not $\mathbf{F}(x,y)=xe^{y}\mathbf{i}+ye^{x}\mathbf{j}$ is a conservative vector field. If it is, find a function $f$ such that $\mathbf{F}=\nabla f$.

   Answer: Not conservative.

3. Determine whether or not $\mathbf{F}(x,y)=(1+2xy+\ln x)\mathbf{i}+x^{2}\mathbf{j}$ is a conservative vector field. If it is, find a function $f$ such that $\mathbf{F}=\nabla f$.

4. Let $\mathbf{F}(x,y,z)=yz\,\mathbf{i}+xz\,\mathbf{j}+(xy+2z)\,\mathbf{k}$ and let $C$ be the line segment from $(1,0,-2)$ to $(4,6,3)$. Find a function $f$ such that $\mathbf{F}=\nabla f$ and use it to evaluate $\int_{C}\mathbf{F}\cdot d\mathbf{r}$.

   Answer: $f(x,y,z)=xyz+z^{2}$ and $77$.

5. Show that the line integral $\int_{C}(1-ye^{-x})\,dx+e^{-x}\,dy$ is independent of path and find its value along a path from $(0,1)$ to $(1,2)$.

6. Let $\mathbf{F}(x,y)=P(x,y)\mathbf{i}+Q(x,y)\mathbf{j}=\dfrac{-y\,\mathbf{i}+x\,\mathbf{j}}{x^{2}+y^{2}}$. Show that $\frac{\partial P}{\partial y}=\frac{\partial Q}{\partial x}$, but $\int_{C}\mathbf{F}\cdot d\mathbf{r}$ is not independent of path. (Hint: compute the integral along two different paths from $(1,0)$ to $(-1,0)$ along the unit circle.)

   1. Let $\mathbf{F}$ be an inverse square force field: $\mathbf{F}(\mathbf{r})=\dfrac{c\,\mathbf{r}}{|\mathbf{r}|^{3}}$ for some constant $c$, where $\mathbf{r}=\langle x,y,z\rangle$.
Find the work done by $\mathbf{F}$ on an object which moves from a point ${P}_{1}$ to a point ${P}_{2}$ in terms of distances ${d}_{1}$ and ${d}_{2}$ from these points to the origin. 2. Let $\mathbf{F}$ be the gravitational force field, $\mathbf{F}\left(\mathbf{r}\right)=\frac{-mMG\mathbf{r}}{{\left|\mathbf{r}\right|}^{3}}$. Find the work done by the gravitational field due to the Sun as the Earth moves from aphelion (${d}_{1}=1.52×{10}^{8}$ km) to perihelion (${d}_{2}=1.47×{10}^{8}$ km). Use values $m=5.97×{10}^{24}$ kg, $M=1.99×{10}^{30}$ kg, and $G=6.67×{10}^{-11}$ $\mathrm{N}{\mathrm{m}}^{2}/{\mathrm{kg}}^{2}$. 3. Let $\mathbf{F}$ be the electric force field, $\mathbf{F}\left(\mathbf{r}\right)=\frac{\epsilon qQ\mathbf{r}}{{\left|\mathbf{r}\right|}^{3}}$. Suppose that an electron with a charge of $-1.6×{10}^{-19}$ C is located at the origin. Find the work done by the electric field due to the electron on a proton as the latter moves from the distance of ${10}^{-12}$ m from the electron to half that distance. Use the value of $\epsilon =8.985×{10}^{9}$.
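The first exercise (the conservative-field check for $\mathbf{F} = \langle 6x+5y,\ 5x+4y \rangle$) can be verified symbolically. This is a sketch using SymPy; the potential is recovered by partial integration, and the arbitrary constant $K$ is omitted:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = 6*x + 5*y   # i-component of F
Q = 5*x + 4*y   # j-component of F

# On a simply connected domain, F is conservative iff dP/dy == dQ/dx
assert sp.diff(P, y) == sp.diff(Q, x)

# Recover a potential f with grad f = F: integrate P with respect to x,
# then add the purely-y part that this integration misses
f = sp.integrate(P, x)
f += sp.integrate(Q - sp.diff(f, y), y)
print(sp.expand(f))  # 3*x**2 + 5*x*y + 2*y**2
```

The same bookkeeping confirms exercise 4: with $f(x,y,z) = xyz + z^2$, the integral is $f(4,6,3) - f(1,0,-2) = 81 - 4 = 77$.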
2022-01-21 08:56:52
https://metricsystem.net/si/base-units/kelvin/
# kelvin

##### SI base unit

| Name | Symbol | Quantity |
| --- | --- | --- |
| kelvin | K | thermodynamic temperature |

### Definition

The kelvin, symbol K, is the SI base unit of thermodynamic temperature. The kelvin is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 × 10⁻²³ when expressed in the unit J K⁻¹, which is equal to kg m² s⁻² K⁻¹, where the kilogram, metre and second are defined in terms of h, c and ΔνCs.

The definition of the kelvin implies the exact relation k = 1.380 649 × 10⁻²³ kg m² s⁻² K⁻¹. Inverting this relation gives an exact expression for the kelvin in terms of the defining constants h, ΔνCs and k:

$1\ \text{K} = \dfrac{1.380\,649 \times 10^{-23}}{k}\ \text{kg}\,\text{m}^2\,\text{s}^{-2}$

$1\ \text{K} = \dfrac{1.380\,649 \times 10^{-23}}{(6.626\,070\,15 \times 10^{-34})(9\,192\,631\,770)}\ \dfrac{h\,\Delta\nu_{Cs}}{k}$

$1\ \text{K} = 2.266\,665\,264\,601\,104\,867\ldots\ \dfrac{h\,\Delta\nu_{Cs}}{k}$

The effect of this definition is that one kelvin is equal to the change of thermodynamic temperature that results in a change of thermal energy kT by 1.380 649 × 10⁻²³ J.

The kelvin is named after the British physicist and engineer William Thomson, 1st Baron Kelvin (1824–1907).

### Thermodynamic temperature

Thermodynamic temperature is a measure of the average kinetic energy of the particles in a substance. The absolute temperature of a gas is directly proportional to the average kinetic energy of its molecules. The Kelvin scale is an absolute thermodynamic temperature scale whose null point is absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics.
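The numeric coefficient in the last line of the definition follows directly from the exact defining constants, and can be checked in a couple of lines:

```python
k  = 1.380649e-23    # Boltzmann constant, J/K (exact by definition)
h  = 6.62607015e-34  # Planck constant, J*s (exact by definition)
dv = 9192631770      # caesium-133 hyperfine transition frequency, Hz (exact)

# 1 K = coeff * h * dnu_Cs / k, so coeff = 1.380649e-23 / (h * dv)
coeff = 1.380649e-23 / (h * dv)
print(coeff)  # ~2.266665264601...
```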
### Ideal gas law

The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It approximates the behaviour of gases under many conditions. Using SI coherent units,

$p V = n R T$

where:

• p is the pressure in pascals, symbol Pa,
• V is the volume in cubic metres, symbol m³,
• T is the absolute temperature in kelvins, symbol K,
• n is the amount of gas in moles, symbol mol,
• R is the ideal gas constant, in J K⁻¹ mol⁻¹.

The ideal gas constant, R, is equal to the product of two of the SI defining constants: the Boltzmann constant, k, and the Avogadro constant, NA. Substituting the Boltzmann constant gives an alternative form of the general gas equation:

$p V = n k N_A T$

$p V = N k T$

where:

• N is the number of molecules of gas,
• k is the Boltzmann constant, in J K⁻¹,
• NA is the Avogadro constant, in mol⁻¹.

### Kinetic temperature

Molecular kinetic theory relates the pressure and volume of a gas to the average molecular kinetic energy. Using SI coherent units,

$p V = \dfrac{2}{3}\, N \,\overline{\left( \dfrac{1}{2} m v^2 \right)}$

where:

• p is the pressure in pascals, symbol Pa,
• V is the volume in cubic metres, symbol m³,
• N is the number of molecules of gas,
• $\overline{\frac{1}{2} m v^2}$ is the average kinetic energy of the gas molecules, in joules, symbol J.

Combining this with the ideal gas law gives an expression for temperature, sometimes referred to as the kinetic temperature:

$N k T = \dfrac{2}{3}\, N \,\overline{\left( \dfrac{1}{2} m v^2 \right)}$

$k T = \dfrac{2}{3}\, \overline{\left( \dfrac{1}{2} m v^2 \right)}$

$T = \dfrac{2}{3}\, \dfrac{1}{k}\, \overline{\left( \dfrac{1}{2} m v^2 \right)}$

where:

• T is the absolute temperature in kelvins, symbol K,
• k is the Boltzmann constant, in J K⁻¹.
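These relations are easy to check numerically. A sketch using the exact SI defining values; the 293.15 K example temperature is an arbitrary choice:

```python
k   = 1.380649e-23    # Boltzmann constant, J/K (exact by definition)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact by definition)

# The ideal gas constant is the product of the two defining constants
R = k * N_A
print(R)  # ~8.3145 J/(K*mol)

# Kinetic temperature: mean kinetic energy per molecule is (3/2) k T
T = 293.15                    # example: 20 degrees Celsius, in kelvins
mean_ke = 1.5 * k * T         # joules per molecule
T_back = (2.0 / 3.0) * mean_ke / k   # invert: T = (2/3)(1/k) * mean KE
assert abs(T_back - T) < 1e-9
```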
It can be seen that the Boltzmann constant is a proportionality constant which relates the average relative kinetic energy of particles in a gas to the thermodynamic temperature of the gas.

### Triple point of water

The triple point of water is the unique combination of temperature and pressure at which ice, liquid water and water vapour can all coexist in thermodynamic equilibrium. The previous definition of the kelvin set the temperature of the triple point of water, symbol TTPW, to be exactly 273.16 K. Because the current definition of the kelvin fixes the numerical value of the Boltzmann constant, k, instead of TTPW, the latter must now be determined experimentally. At the time of adopting the current definition, TTPW was equal to 273.16 K with a relative standard uncertainty of 3.7 × 10⁻⁷, based on measurements of k made prior to the redefinition. The triple point of water occurs at a partial vapour pressure of 611.66 Pa.

### Spectral radiance

The spectral radiance of a body varies with its temperature. It is a description of the amount of energy that it emits at different electromagnetic radiation frequencies. Spectral radiance is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. A black body is an idealised object which absorbs and emits all frequencies of electromagnetic radiation. Planck’s radiation law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature, when there is no net flow of energy between the body and its surroundings. Planck’s law shows that for any given temperature there is a unique wavelength of electromagnetic radiation at which the spectral radiance is at a maximum. At higher temperatures the wavelength of the peak radiance is shorter. For example, at a temperature of about 4000 K the peak radiance occurs at the red end of the visible spectrum, and at 7600 K it is at the violet end.
### Wien’s displacement law

Wien’s displacement law encapsulates the relationship, described by Planck’s law, between the wavelength of the peak radiance and the temperature of a black body. Wien’s law states that the black body radiation curve for a given temperature has its peak at a wavelength that is inversely proportional to the temperature. Using SI coherent units, the proportionality constant is Wien’s displacement constant, b:

$\lambda_{\text{max}} = \dfrac{b}{T}$

where:

• λmax is the wavelength in metres, symbol m,
• T is the absolute temperature in kelvins, symbol K,
• b is Wien’s displacement constant in metre kelvins, symbol m K.

Wien’s displacement constant is 2.897 771 955 … × 10⁻³ m K.

### Colour temperature

The kelvin is used as a measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black body radiator emits light whose colour depends on the temperature of the radiator. Black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish.

Colour temperature is important in the fields of image projection and photography, where a colour temperature of approximately 5600 K is required to match “daylight” film emulsions. Image editing software and digital cameras often use colour temperature in K for colour balancing. The higher the colour temperature, the more white or blue the image will be. A reduction in colour temperature gives an image more dominated by reddish, “warmer” colours.

In astronomy, the stellar classification of stars and their place on the Hertzsprung-Russell diagram are based, in part, upon their surface temperature, known as effective temperature. The photosphere of the Sun has an effective temperature of 5778 K.

### Noise temperature

In electronics, the kelvin is used as an indicator of how noisy a circuit is in relation to an ultimate noise floor, i.e. the noise temperature.
The Johnson-Nyquist noise of discrete resistors and capacitors is a type of thermal noise whose magnitude is derived from the Boltzmann constant; it can be used to determine the noise temperature of a circuit using the Friis formulas for noise.
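As a sense of scale, the RMS Johnson-Nyquist noise voltage across a resistor is $v_n = \sqrt{4 k T R \,\Delta f}$. A sketch with arbitrary example values (a 10 kΩ resistor at room temperature over a 10 kHz bandwidth):

```python
from math import sqrt

k = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_rms(T, R, bandwidth):
    """RMS thermal noise voltage (V) of resistance R (ohms) at temperature T (K)."""
    return sqrt(4.0 * k * T * R * bandwidth)

# Example: 10 kOhm resistor at 300 K over a 10 kHz bandwidth
v_n = johnson_noise_rms(300.0, 10e3, 10e3)
print(v_n)  # ~1.29e-06 V, i.e. about 1.3 microvolts
```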
2023-02-06 04:07:53
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-4-quadratic-functions-4-5-solving-equations-by-factoring-4-5-exercises-page-359/27
## Intermediate Algebra: Connecting Concepts through Application

$\color{blue}{\left\{-4, 0\right\}}$

Factor out $5x$ to obtain: $5x(x+4)=0$

RECALL: The Zero-Factor Property states that if $ab=0$, then $a=0$ or $b=0$ (or both are zero).

Use the Zero-Factor Property by equating each factor to zero to obtain: $5x=0$ or $x+4=0$

Solve each equation:

$5x=0 \implies \frac{5x}{5}=\frac{0}{5} \implies x=0$

or

$x+4=0 \implies x=0-4 \implies x=-4$

Therefore, the solution set is $\color{blue}{\left\{-4, 0\right\}}$.
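The exercise statement is not reproduced above, but the factored form implies the equation $5x^2 + 20x = 0$; under that assumption, the solution can be checked symbolically with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
expr = 5*x**2 + 20*x   # equation implied by the factorisation 5x(x + 4) = 0

print(sp.factor(expr))                      # 5*x*(x + 4)
print(sorted(sp.solve(sp.Eq(expr, 0), x)))  # [-4, 0]
```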
2018-05-26 12:09:20
https://en.wikipedia.org/wiki/Strategy_dynamics
# Strategy dynamics

The word ‘dynamics’ appears frequently in discussions and writing about strategy, and is used in two distinct, though equally important senses. The dynamics of strategy and performance concerns the ‘content’ of strategy – initiatives, choices, policies and decisions adopted in an attempt to improve performance, and the results that arise from these managerial behaviors. The dynamic model of the strategy process is a way of understanding how strategic actions occur. It recognizes that strategic planning is dynamic, that is, strategy-making involves a complex pattern of actions and reactions. It is partially planned and partially unplanned. A literature search shows the first of these senses to be both the earliest and most widely used meaning of ‘strategy dynamics’, though that is not to diminish the importance of the dynamic view of the strategy process.

## Static models of strategy and performance

The static assessment of strategy and performance, and its tools and frameworks, dominates research, textbooks and practice in the field. It stems from a presumption dating back to before the 1980s that market and industry conditions determine how firms in a sector perform on average, and the scope for any firm to do better or worse than that average. For example, the airline industry is notoriously unprofitable, yet some firms are spectacularly profitable exceptions. The ‘industry forces’ paradigm was established most firmly by Michael Porter (1980) in his seminal book ‘Competitive Strategy’, the ideas of which still form the basis of strategy analysis in many consulting firms and investment companies. Richard Rumelt (1991) was amongst the first to challenge this presumption of the power of ‘industry forces’, and it has since become well understood that business factors are more important drivers of performance than are industry factors – in essence, this means you can do well in difficult industries, and struggle in industries where others do well.
Although the relative importance of industry factors and firm-specific factors continues to be researched, the debate is now essentially over – management of strategy matters. The increasing interest in how some businesses in an industry perform better than others led to the emergence of the ‘resource based view’ (RBV) of strategy (Wernerfelt, 1984; Barney, 1991; Grant, 1991), which seeks to discover the firm-specific sources of superior performance – an interest that has increasingly come to dominate research. ## The need for a dynamic model of strategy and performance The debate about the relative influence of industry and business factors on performance, and the RBV-based explanations for superior performance both, however, pass over a more serious problem. This concerns exactly what the ‘performance’ is that management seeks to improve. Would you prefer, for example, (A) to make \$15m per year indefinitely, or (B) \$12m this year, increasing by 20% a year, starting with the same resources? Nearly half a century ago, Edith Penrose (1959) pointed out that superior profitability (e.g. return on sales or return on assets) was neither interesting to investors – who value the prospect of increasing future cash flows – nor sustainable over time. Profitability is not entirely unimportant – it does after all provide the investment in new resources to enable growth to occur. More recently, Rugman and Verbeke (2002) have reviewed the implications of this observation for research in strategy. Richard Rumelt (2007) has again raised the importance of making progress with the issue of strategy dynamics, describing it as still ‘the next frontier … underresearched, underwritten about, and underunderstood’. The essential problem is that tools explaining why firm A performs better than firm B at a point in time are unlikely to explain why firm B is growing its performance more rapidly than firm A. 
This is not just of theoretical concern, but matters to executives too – efforts by the management of firm B to match A’s profitability could well destroy its ability to grow profits, for example. A further practical problem is that many of the static frameworks do not provide sufficiently fine-grained guidance on strategy to help raise performance. For example, an investigation that identifies an attractive opportunity to serve a specific market segment with specific products or services, delivered in a particular way, is unlikely to yield fundamentally different answers from one year to the next. Yet strategic management has much to do from month to month to ensure the business system develops strongly so as to take that opportunity quickly and safely. What is needed is a set of tools that explain how performance changes over time, and how to improve its future trajectory – i.e. a dynamic model of strategy and performance.

## A possible dynamic model of strategy and performance

To develop a dynamic model of strategy and performance requires components that explain how factors change over time. Most of the relationships on which business analysis is based describe relationships that are static and stable over time. For example, “profits = revenue minus costs”, or “market share = our sales divided by total market size” are relationships that are true. Static strategy tools seek to solve the strategy problem by extending this set of stable relationships, e.g. “profitability = some complex function of product development capability”. Since a company’s sales clearly change over time, there must be something further back up the causal chain that makes this happen. One such item is ‘customers’ – if the firm has more customers now than last month, then (everything else being equal) it will have more sales and profits. The number of ‘customers’ at any time, however, cannot be calculated from anything else.
It is one example of a factor with a unique characteristic, known as an ‘asset-stock’. Its critical feature is that it accumulates over time, so “customers today = customers yesterday +/- customers won and lost”. This is not a theory or statistical observation, but is axiomatic of the way the world works. Other examples include cash (changed by cash in-flows and out-flows), staff (changed by hiring and attrition), capacity, product range and dealers. Many intangible factors behave in the same way, e.g. reputation and staff skills. Dierickx and Cool (1989) point out that this causes serious problems for explaining performance over time:

• Time compression diseconomies: it takes time to accumulate resources.
• Asset mass efficiencies: ‘the more you have, the faster you can get more’.
• Interconnectedness of asset stocks: building one resource depends on other resources already in place.
• Asset erosion: tangible and intangible assets alike deteriorate unless effort and expenditure are committed to maintaining them.
• Causal ambiguity: it can be hard to work out, even for the firm that owns a resource, why exactly it accumulates and depletes at the rate it does.

The consequence of these features is that relationships in a business system are highly non-linear. Statistical analysis will not, then, be able meaningfully to confirm any causal explanation for the number of customers at any moment in time. If that is true, then statistical analysis also cannot say anything useful about any performance that depends on customers or on other accumulating asset-stocks – which is always the case. Fortunately, a method known as system dynamics captures both the math of asset-stock accumulation (i.e. resource- and capability-building) and the interdependence between these components (Forrester, 1961; Sterman, 2000). The asset-stocks relevant to strategy performance are resources [things we have] and capabilities [things we are good at doing].
This makes it possible to connect back to the resource-based view, though with one modification. RBV asserts that any resource which is clearly identifiable, and can easily be acquired or built, cannot be a source of competitive advantage, so only resources or capabilities that are valuable, rare, hard to imitate or buy, and embedded in the organization [the ‘VRIO’ criteria] can be relevant to explaining performance, for example reputation or product development capability. Yet day-to-day performance must reflect the simple, tangible resources such as customers, capacity and cash. VRIO resources may be important also, but it is not possible to trace a causal path from reputation or product development capability to performance outcomes without going via the tangible resources of customers and cash.

Warren (2002, 2007) brought together the specification of resources [tangible and intangible] and capabilities with the math of system dynamics to assemble a framework for strategy dynamics and performance with the following elements:

• Performance, P, at time t is a function of the quantity of resources R1 to Rn, discretionary management choices, M, and exogenous factors, E, at that time (Equation 1).

(1) P(t) = f{R1(t), …, Rn(t), M(t), E(t)}

• The current quantity of each resource Ri at time t is its level at time t-1 plus or minus any resource-flows that have occurred between t-1 and t (Equation 2).

(2) Ri(t) = Ri(t-1) +/- ΔRi(t-1 .. t)

• The change in quantity of Ri between time t-1 and time t is a function of the quantity of resources R1 to Rn at time t-1, including that of resource Ri itself, of management choices, M, and of exogenous factors E at that time (Equation 3).

(3) ΔRi(t-1 .. t) = f{R1(t-1), …, Rn(t-1), M(t-1), E(t-1)}

This set of relationships gives rise to an ‘architecture’ that depicts, both graphically and mathematically, the core of how a business or other organization develops and performs over time. To this can be added other important extensions, including:

• the consequence of resources varying in one or more qualities or ‘attributes’ [e.g. customer size, staff experience]
• the development of resources through stages [disloyal and loyal customers, junior and senior staff]
• rivalry for any resource that may be contested [customers clearly, but also possibly staff and other factors]
• intangible factors [e.g. reputation, staff skills]
• capabilities [e.g. product development, selling]

## The Static Model of the Strategy Process

According to many introductory strategy textbooks, strategic thinking can be divided into two segments: strategy formulation and strategy implementation. Strategy formulation is done first, followed by implementation. Strategy formulation involves:

1. Doing a situation analysis: both internal and external; both micro-environmental and macro-environmental.
2. Concurrent with this assessment, objectives are set. This involves crafting vision statements (long term), mission statements (medium term), overall corporate objectives (both financial and strategic), strategic business unit objectives (both financial and strategic), and tactical objectives.
3. These objectives should, in the light of the situation analysis, suggest a strategic plan. The plan provides the details of how to obtain these goals.

This three-step strategy formation process is sometimes referred to as determining where you are now, determining where you want to go, and then determining how to get there. The next phase, according to this linear model, is the implementation of the strategy.
This involves:

• Allocation of sufficient resources (financial, personnel, time, computer system support)
• Establishing a chain of command or some alternative structure (such as cross-functional teams)
• Assigning responsibility of specific tasks or processes to specific individuals or groups
• Managing the process. This includes monitoring results, comparing to benchmarks and best practices, evaluating the efficacy and efficiency of the process, controlling for variances, and making adjustments to the process as necessary.
• When implementing specific programs, this involves acquiring the requisite resources, developing the process, training, process testing, documentation, and integration with (and/or conversion from) legacy processes

## The Dynamic Model of the Strategy Process

Several theorists have recognized a problem with this static model of the strategy process: it is not how strategy is developed in real life. Strategy is actually a dynamic and interactive process. Some of the earliest challenges to the planned strategy approach came from Lindblom in the 1960s and Quinn in the 1980s. Charles Lindblom (1959) claimed that strategy is a fragmented process of serial and incremental decisions. He viewed strategy as an informal process of mutual adjustment with little apparent coordination. James Brian Quinn (1978) developed an approach that he called "logical incrementalism". He claimed that strategic management involves guiding actions and events towards a conscious strategy in a step-by-step process. Managers nurture and promote strategies that are themselves changing. In regard to the nature of strategic management he says: "Constantly integrating the simultaneous incremental process of strategy formulation and implementation is the central art of effective strategic management." (?page 145). Whereas Lindblom saw strategy as a disjointed process without conscious direction, Quinn saw the process as fluid but controllable.
Joseph Bower (1970) and Robert Burgelman (1980) took this one step further. Not only are strategic decisions made incrementally rather than as part of a grand unified vision, but according to them, this multitude of small decisions are made by numerous people in all sections and levels of the organization. Henry Mintzberg (1978) made a distinction between deliberate strategy and emergent strategy. Emergent strategy originates not in the mind of the strategist, but in the interaction of the organization with its environment. He claims that emergent strategies tend to exhibit a type of convergence in which ideas and actions from multiple sources integrate into a pattern. This is a form of organizational learning, in fact, on this view, organizational learning is one of the core functions of any business enterprise (See Peter Senge's The Fifth Discipline (1990).) Constantinos Markides (1999) describes strategy formation and implementation as an ongoing, never-ending, integrated process requiring continuous reassessment and reformation. A particularly insightful model of strategy process dynamics comes from J. Moncrieff (1999). He recognized that strategy is partially deliberate and partially unplanned, though whether the resulting performance is better for being planned or not is unclear. The unplanned element comes from two sources : “emergent strategies” result from the emergence of opportunities and threats in the environment and “Strategies in action” are ad hoc actions by many people from all parts of the organization. These multitudes of small actions are typically not intentional, not teleological, not formal, and not even recognized as strategic. They are emergent from within the organization, in much the same way as “emergent strategies” are emergent from the environment. However, it is again not clear whether, or under what circumstances, strategies would be better if more planned. 
In this model, strategy is both planned and emergent, dynamic, and interactive. Five general processes interact: strategic intention, the organization's response to emergent environmental issues, the dynamics of the actions of individuals within the organization, the alignment of action with strategic intent, and strategic learning. The alignment of action with strategic intent (the top line in the diagram) is the blending of strategic intent, emergent strategies, and strategies in action to produce strategic outcomes. The continuous monitoring of these strategic outcomes produces strategic learning (the bottom line in the diagram). This learning comprises feedback into internal processes, the environment, and strategic intentions. The complete system thus amounts to a triad of continuously self-regulating feedback loops. Strictly speaking, "quasi-self-regulating" is the more accurate term, since the feedback loops can be ignored by the organization. The system is self-adjusting only to the extent that the organization is prepared to learn from the strategic outcomes it creates. This requires effective leadership and an agile, questioning corporate culture. In this model, the distinction between strategy formation and strategy implementation disappears.

## Criticisms of Dynamic Strategy Process Models

Some detractors claim that these models are too complex to teach: no one will understand such a model until they see it in action. Accordingly, the two-part linear categorization scheme is probably more valuable in textbooks and lectures. Also, some implementation decisions, such as specific project implementations, do not fit a dynamic model. In these cases implementation is exclusively tactical and often routinized; strategic intent and dynamic interactions influence the decision only indirectly.
http://piping-designer.com/index.php/properties/thermodynamics/1976-specific-heat-ratio
# Specific Heat Ratio

Written by Jerry Ratzlaff. Posted in Thermodynamics.

The specific heat ratio ( $$\gamma$$ ), also called the heat capacity ratio, adiabatic index, or isentropic expansion factor, is a dimensionless number: the ratio of the heat capacity at constant pressure to the heat capacity at constant volume.

### Specific Heat Ratio Formula

$$\large{ \gamma = \frac {C_p } {C_v} }$$

Where:

$$\large{ \gamma }$$   (Greek symbol gamma) or $$\large{ \kappa }$$   (Greek symbol kappa) = specific heat ratio

$$\large{ C_p }$$ = specific heat at constant pressure

$$\large{ C_v }$$ = specific heat at constant volume
https://www.tutorialspoint.com/program-to-calculate-volume-of-ellipsoid-in-cplusplus
# Program to calculate volume of Ellipsoid in C++

Given the three semi-axes r1, r2 and r3, the task is to find the volume of an ellipsoid. An ellipsoid is a quadric surface, that is, a surface that may be defined as the zero set of a polynomial of degree two in three variables. Among quadric surfaces, an ellipsoid is characterized by being bounded, or equivalently by the fact that every plane section of it is an ellipse, a single point, or empty.

Formula used to calculate volume of ellipsoid:

Volume of Ellipsoid = (4/3) * pi * r1 * r2 * r3

## Example

Input: r1 = 6.3, r2 = 43.4, r3 = 3.7
Output: volume of ellipsoid is : 4237.61

## Algorithm

Start
Step 1 -> define constant pi = 3.14159265358979
Step 2 -> Declare function to calculate volume of ellipsoid
   double volume(double r1, double r2, double r3)
      return (4.0 / 3.0) * pi * r1 * r2 * r3
Step 3 -> In main()
   Declare variables double r1 = 6.3, r2 = 43.4, r3 = 3.7
   Call volume(r1, r2, r3)
Stop

## Example

#include <iostream>
using namespace std;

const double pi = 3.14159265358979323846;

// Function to find the volume of an ellipsoid with semi-axes r1, r2, r3
double volume(double r1, double r2, double r3) {
    return (4.0 / 3.0) * pi * r1 * r2 * r3;
}

int main() {
    double r1 = 6.3, r2 = 43.4, r3 = 3.7;
    cout << "volume of ellipsoid is : " << volume(r1, r2, r3);
    return 0;
}

## Output

volume of ellipsoid is : 4237.61

Updated on 20-Sep-2019 14:25:59
https://artofproblemsolving.com/wiki/index.php/2014_AMC_12B_Problems/Problem_21
# 2014 AMC 12B Problems/Problem 21 ## Problem In the figure, $ABCD$ is a square of side length $1$. The rectangles $JKHG$ and $EBCF$ are congruent. What is $BE$? $[asy] pair A=(1,0), B=(0,0), C=(0,1), D=(1,1), E=(2-sqrt(3),0), F=(2-sqrt(3),1), G=(1,sqrt(3)/2), H=(2.5-sqrt(3),1), J=(.5,0), K=(2-sqrt(3),1-sqrt(3)/2); draw(A--B--C--D--cycle); draw(K--H--G--J--cycle); draw(F--E); label("A",A,SE); label("B",B,SW); label("C",C,NW); label("D",D,NE); label("E",E,S); label("F",F,N); label("G",G,E); label("H",H,N); label("J",J,S); label("K",K,W); [/asy]$ $\textbf{(A) }\frac{1}{2}(\sqrt{6}-2)\qquad\textbf{(B) }\frac{1}{4}\qquad\textbf{(C) }2-\sqrt{3}\qquad\textbf{(D) }\frac{\sqrt{3}}{6}\qquad\textbf{(E) } 1-\frac{\sqrt{2}}{2}$ ## Solutions ### Solution 1 Draw the altitude from $H$ to $AB$ and call the foot $L$. Then $HL=1$. Consider $HJ$. It is the hypotenuse of both right triangles $\triangle HGJ$ and $\triangle HLJ$, and we know $JG=HL=1$, so we must have $\triangle HGJ\cong\triangle JLH$ by Hypotenuse-Leg congruence. From this congruence we have $LJ=HG=BE$. Notice that all four triangles in this picture are similar. Also, we have $LA=HD=EJ$. So set $x=LJ=HG=BE$ and $y=LA=HD=EJ$. Now $BE+EJ+JL+LA=2(x+y)=1$. This means $x+y=\frac{1}{2}=BE+EJ=BJ$, so $J$ is the midpoint of $AB$. So $\triangle AJG$, along with all other similar triangles in the picture, is a 30-60-90 triangle, and we have $AG=\sqrt{3} \cdot AJ=\sqrt{3}/2$ and subsequently $GD=\frac{2-\sqrt{3}}{2}=KE$. This means $EJ=\sqrt{3} \cdot KE=\frac{2\sqrt{3}-3}{2}$, which gives $BE=\frac{1}{2}-EJ=\frac{4-2\sqrt{3}}{2}=2-\sqrt{3}$, so the answer is $\textbf{(C)}$. ### Solution 2 Let $BE = x$. Let $JA = y$. Because $\angle FKH = \angle EJK = \angle AGJ = \angle DHG$ and $\angle FHK = \angle EKJ = \angle AJG = \angle DGH$, $\triangle KEJ, \triangle JAG, \triangle GDH, \triangle HFK$ are all similar. 
Using proportions and the Pythagorean theorem, we find $$EK = xy$$ $$FK = \sqrt{1-y^2}$$ $$EJ = x\sqrt{1-y^2}$$ Because we know that $BE+EJ+AJ = EK + FK = 1$, we can set up a system of equations, and solving for $x$, we get $$x + x\sqrt{1-y^2} + y = 1 \implies x= \frac{1-y}{1+\sqrt{1-y^2}}$$ $$xy + \sqrt{1-y^2} = 1 \implies x= \frac{1-\sqrt{1-y^2}}{y}$$ Now solving for $y$, we get $$\frac{1-y}{1+\sqrt{1-y^2}}=\frac{1-\sqrt{1-y^2}}{y} \implies y(1-y)=(1-\sqrt{1-y^2})(1+\sqrt{1-y^2}) \implies y-y^2=y^2 \implies y=\frac{1}{2}$$ Plugging $y=\frac{1}{2}$ into the second equation for $x$, we get $$x= 2\left(1-\sqrt{1-\frac{1}{4}}\right) = 2\left(\frac{2-\sqrt{3}}{2} \right) = \boxed{\textbf{(C)}\ 2-\sqrt{3}}$$

### Solution 3

Let $BE = x$, $EK = a$, and $EJ = b$. Then $x^2 = a^2 + b^2$ and because $\triangle KEJ \cong \triangle GDH$ and $\triangle KEJ \sim \triangle JAG$, $\frac{GA}{1} = 1 - a = \frac{b}{x}$. Furthermore, the area of the four triangles and the two rectangles sums to 1: $$1 = 2x + GA\cdot JA + ab$$ $$1 = 2x + (1 - a)(1 - (x + b)) + ab$$ $$1 = 2x + \frac{b}{x}(1 - x - b) + \left(1 - \frac{b}{x}\right)b$$ $$1 = 2x + \frac{b}{x} - b - \frac{b^2}{x} + b - \frac{b^2}{x}$$ $$x = 2x^2 + b - 2b^2$$ $$x - b = 2(x - b)(x + b)$$ $$x + b = \frac{1}{2}$$ $$b = \frac{1}{2} - x$$ $$a = 1 - \frac{b}{x} = 2 - \frac{1}{2x}$$ By the Pythagorean theorem: $x^2 = a^2 + b^2$ $$x^2 = \left(2 - \frac{1}{2x}\right)^2 + \left(\frac{1}{2} - x\right)^2$$ $$x^2 = 4 - \frac{2}{x} + \frac{1}{4x^2} + \frac{1}{4} - x + x^2$$ $$0 = \frac{1}{4x^2} - \frac{2}{x} + \frac{17}{4} - x$$ $$0 = 1 - 8x + 17x^2 - 4x^3.$$ By the rational root theorem, $x = \frac{1}{4}$ is the only rational root; factoring out $(4x - 1)$ leaves $x^2 - 4x + 1 = 0$, whose roots are $2 - \sqrt{3}$ and $2 + \sqrt{3}$. The roots $\frac{1}{4}$ and $2 + \sqrt{3}$ are extraneous because they imply $a = 0$ and $x > 1$, respectively, thus $x = \boxed{\textbf{(C)}\ 2-\sqrt{3}}$.

### Solution 4

Let $\angle FKH = k$ and $CF = a$. As shown in Solution 2, all four triangles in the picture are similar.
From the square side lengths: $$a + \sin(k) \cdot 1 + \cos(k) \cdot a = 1$$ $$\sin(k)a + \cos(k) = 1$$ Solving for $a$ we get: $$a = \frac{1-\sin(k)}{\cos(k) + 1} = \frac{1 - \cos(k)}{\sin(k)}$$ $$(1-\sin(k)) \cdot \sin(k) = (1 - \cos(k))\cdot(\cos(k) + 1)$$ $$\sin(k)-\sin(k)^2 = \cos(k) + 1 - \cos(k)^2 - \cos(k)$$ $$\sin(k)-\sin(k)^2 = \sin(k)^2$$ $$1-\sin(k) = \sin(k)$$ $$\sin(k) = \frac{1}{2}, \cos(k) = \frac{\sqrt 3}{2}$$ $$a = \frac{1 - \frac{\sqrt 3}{2}}{\frac{1}{2}} = 2 - \sqrt 3$$

### Solution 5

Note that $HJ$ is a diagonal of $JKHG$, so it must be equal in length to $FB$. Therefore, quadrilateral $FHJB$ has $FH\parallel BJ$, and $FB=HJ$, so it must be either an isosceles trapezoid or a parallelogram. But due to the slope of $FB$ and $HJ$, we see that it must be a parallelogram. Therefore, $FH=BJ$. But by the symmetry in rectangle $FEAD$, we see that $FH=JA$. Therefore, $BJ=FH=JA$. We also know that $BJ+JA=1$, hence $BJ=JA=\frac12$. As $JG=1$ and $JA=\frac12$, and as $\triangle GJA$ is right, we know that $\triangle GJA$ must be a 30-60-90 triangle. Therefore, $GA=\sqrt{3}/2$ and $DG=1-\sqrt{3}/2$. But by similarity, $\triangle DHG$ is also a 30-60-90 triangle, hence $DH=\sqrt{3}-3/2$. But $\triangle DHG\cong\triangle EJK$, hence $EJ=\sqrt{3}-3/2$. As $BJ=1/2$, this implies that $BE=BJ-EJ=1/2-\sqrt{3}+3/2=2-\sqrt{3}$. Thus the answer is $\boxed{\textbf{(C)}}$.