https://physics.paperswithcode.com/paper/reducing-autocorrelation-times-in-lattice
# Reducing Autocorrelation Times in Lattice Simulations with Generative Adversarial Networks

8 Nov 2018 · Jan M. Pawlowski, Julian M. Urban

Short autocorrelation times are essential for a reliable error assessment in Monte Carlo simulations of lattice systems. In many interesting scenarios, the decay of autocorrelations in the Markov chain is prohibitively slow. Generative samplers can provide statistically independent field configurations, thereby potentially ameliorating these issues. In this work, the applicability of neural samplers to this problem is investigated. Specifically, we work with a generative adversarial network (GAN). We propose to address difficulties regarding its statistical exactness through the implementation of an overrelaxation step, by searching the latent space of the trained generator network. This procedure can be incorporated into a standard Monte Carlo algorithm, which then permits a sensible assessment of ergodicity and balance based on consistency checks. Numerical results for real, scalar $\phi^4$-theory in two dimensions are presented. We achieve a significant reduction of autocorrelations while accurately reproducing the correct statistics. We discuss possible improvements to the approach as well as potential solutions to persisting issues.

## Categories

High Energy Physics - Lattice · Computational Physics
http://mathhelpforum.com/algebra/52669-one-more-please-am-i-doing-right.html
# Thread: One more please, am I doing this right?

1. ## One more please, am I doing this right?

   I just want to make sure I am doing this correctly.

   2(x+1) - 3(x-2) > 5
   2x + 2 - 3x + 6 > 5
   x + 2 + 6 > 5
   x + 8 > 5
   x > -8 + 5

   Final answer is x > -3.

2. Originally Posted by tpoma

   > I just want to make sure I am doing this correctly.
   > 2(x+1) - 3(x-2) > 5
   > 2x + 2 - 3x + 6 > 5
   > x + 2 + 6 > 5
   > x + 8 > 5
   > x > -8 + 5
   > Final answer is x > -3.

   Look at 2x - 3x: it is not equal to x. That will change your answer; otherwise it would have been correct.

3. So instead of being a -3 it should have been a positive 3, making the final answer x < 3. Right?

4. Originally Posted by tpoma

   > So instead of being a -3 it should have been a positive 3, making the final answer x < 3. Right?

   Yes, you are right.
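The corrected answer (x < 3) can be spot-checked numerically. The following short script is a sketch added for illustration and is not part of the original thread; it evaluates the left-hand side of the inequality at sample points and confirms it exceeds 5 exactly when x < 3:

```python
def lhs(x):
    # Left-hand side of the original inequality: 2(x+1) - 3(x-2)
    return 2 * (x + 1) - 3 * (x - 2)

# Expanding gives 2x + 2 - 3x + 6 = -x + 8, so the inequality -x + 8 > 5
# reduces to -x > -3, i.e. x < 3 (dividing by -1 flips the sign).
for x in [-10.0, 0.0, 2.9, 3.0, 3.1, 10.0]:
    assert (lhs(x) > 5) == (x < 3), f"mismatch at x={x}"
print("lhs(x) > 5 holds exactly when x < 3 at all sample points")
```

Note how the boundary behaves: at x = 3 the left side equals exactly 5, so the strict inequality fails there, matching x < 3 rather than x ≤ 3.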
http://cnx.org/content/m34633/latest/?collection=col11207/latest
# Handling Slider Change Events in Flex 3 and Flex 4

Module by: R.G. (Dick) Baldwin

Summary: Learn how to write inline event handler code to handle slider change events.

## Preface

### General

This lesson is part of a series of tutorial lessons dedicated to programming with Adobe Flex. The material in this lesson applies to both Flex 3 and Flex 4.

### Viewing tip

I recommend that you open another copy of this document in a separate browser window and use the following links to easily find and view the figures and listings while you are reading about them.

#### Figures

- Figure 1. Browser image at startup for the Flex 3 project.
- Figure 2. A toolTip on the slider.
- Figure 3. Changing the height of the image.
- Figure 4. Flex Builder 3 Components tab exposed.
- Figure 5. Drag controls onto the Flex Builder 3 Design tab.
- Figure 6. The Flex Builder 3 Properties tab exposed.
- Figure 7. Browser image at startup for the hybrid project.
- Figure 8. Flash Builder 4 Components tab exposed.
- Figure 9. Drag controls onto the Flash Builder 4 Design tab.

#### Supplemental material

I also recommend that you study the other lessons in my extensive collection of online programming tutorials. You will find a consolidated index at www.DickBaldwin.com.

## General background information

If you learn how to program using ActionScript 3, you will probably integrate large amounts of ActionScript code into your Flex projects to provide complex event handling. In the meantime, it is possible to provide simple event handlers in Flex by embedding a very small amount of ActionScript code in your Flex code. I will illustrate and explain that capability in this lesson.
## Preview

I encourage you to run the online version of the programs from this lesson before continuing.

### Three Flex projects

I will present and explain three Flex projects in this lesson. The first project is named SliderChangeEvent01. This project was first developed using Flex Builder 3 and later developed using the Flex 3 compiler and Flash Builder 4. The results were essentially the same in both cases. This project uses classes from the Flex 3 (mx) library exclusively. (The screen shots shown in Figure 4, Figure 5, and Figure 6 are from Flex Builder 3.)

### The second project

The second project is named SliderChangeEvent02. This project was developed using the Flex 4 compiler and Flash Builder 4. It is a hybrid project that uses classes from both the Flex 3 (mx) library and the Flex 4 (spark) library. The behavior of this project is similar to the behavior of the first project, but the two look different in several ways that I will explain later.

### The third project

The third project is named SliderChangeEvent03. This project is a modification of the hybrid project in which the classes used are drawn exclusively from the Flex 4 spark library.

### Order of upcoming explanations

I will explain the Flex 3 project named SliderChangeEvent01 in detail. Then I will explain the differences between that project and the hybrid project named SliderChangeEvent02. Finally, I will explain the differences between that project and the project named SliderChangeEvent03.

## SliderChangeEvent01 output image at startup

The project named SliderChangeEvent01 starts running in Flash Player with the image shown in Figure 1 appearing in the browser. The image that you see in Figure 1 consists of two Flex 3 Label controls, one Flex 3 HSlider control, and one Flex 3 Image control arranged vertically and centered in the browser window.

### The Application container

All XML documents must have a root element.
The root of a Flex 3 application is a container element that is often called the Application container. (You can learn all about the Application container class in the Adobe Flex 3.5 Language Reference.) Briefly, the Application container (which corresponds to the root element in the Flex XML code) holds all other containers and components.

### Vertical layout

By default, the Flex 3 Application container lays out all of its children vertically as shown in Figure 1. (As you will see later, this is not the case for the Flex 4 Application container.) The default vertical layout occurs when the layout attribute is not specified, as is the case in this application. According to the Adobe Flex 3.5 Language Reference, the layout property:

> "Specifies the layout mechanism used for this application. Applications can use "vertical", "horizontal", or "absolute" positioning. Vertical positioning lays out each child component vertically from the top of the application to the bottom in the specified order. Horizontal positioning lays out each child component horizontally from the left of the application to the right in the specified order. Absolute positioning does no automatic layout and requires you to explicitly define the location of each child component. The default value is vertical."

### A toolTip on the slider

If you point to the slider with your mouse, a tool tip showing the word Height will appear as shown in Figure 2.

### The slider's thumb

The little triangle that you see on the slider in these images is often referred to as the slider's thumb. As you will see later, the position of the thumb is intended to represent the height of the image below the slider. The left end of the slider represents a height of 100 pixels and the right end represents a height of 250 pixels (which just happens to be the actual height of the raw image).

### Changing the height of the image

If you grab the thumb with the mouse and move it to the left or the right, two obvious visual effects occur.
The first is that the value represented by the current position of the thumb is displayed above the thumb as shown in Figure 3. (As you will see later, the value is also displayed in a Flex 4 application, but by default the appearance is white numerals on a black background.)

### The second visual effect

The second visual effect of moving the thumb is that the height of the image changes to the value represented by the position of the thumb on the slider. (An Image object has a property named maintainAspectRatio. By default, the value of this property is true. Therefore, when the height is changed, the width changes in a proportional manner.) Note that the upper-left corner of the image remains anchored to the same point as the height of the image changes, as shown in Figure 3.

## Discussion and sample code

### The Flex 3 project named SliderChangeEvent01

#### Creating the layout

Once you create your new Flex 3 project, there are at least three ways that you can create your layout using Flex Builder 3:

1. Select the Design tab in the upper-middle panel of the IDE (see Figure 5) and drag your containers, controls, and other components from the Components tab onto your design window.
2. Select the Source tab in Figure 5 and write the raw XML code that defines your layout.
3. A combination of 1 and 2 above.

#### Expose the Components tab

When you select the Design tab in the upper-middle window of the IDE, the lower-left window changes to the appearance shown in Figure 4 with the Flex 3 (mx) Components tab exposed. The list of available components that you see in Figure 4 also appears when you create a new project in Flash Builder 4 and specify the use of the Flex 3 compiler.
#### A list of available components

Although they aren't all shown in Figure 4 due to space limitations, the Flex Builder 3 Components tab lists all of the components that you can use in your Flex application, grouped into the following categories:

- Custom
- Controls
- Layout
- Navigators
- Charts

#### Expose the design window

Selecting the Design tab mentioned above also exposes the Flex Builder 3 design window shown in Figure 5. A similar design window is exposed when you create a new project in Flash Builder 4 specifying the Flex 3 compiler and then select the Design tab. The purpose is the same, but some of the items at the top of the design window in Flash Builder 4 are different.

#### Drag components onto the Design tab

You can drag components from the Components tab shown in Figure 4 onto the Design tab shown in Figure 5 to create your layout in the Flex Builder 3 design mode or in the Flash Builder 4 design mode. As you do that, the corresponding XML code is automatically generated for you. For example, Figure 5 shows the results of dragging two Label controls, one HSlider control, and one Image control from the Components tab of Figure 4 to the Design tab of Figure 5. (No attempt has been made to set property values on any of the controls shown in Figure 5.)

#### XML code before setting properties

If you select the Source tab at this point, you will see the XML code shown in Listing 1.

Listing 1: XML code before setting properties for SliderChangeEvent01.

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Label text="Label"/>
    <mx:Label text="Label"/>
    <mx:HSlider/>
    <mx:Image/>
</mx:Application>
```

#### Compile and run

As you can see, the XML code in Listing 1 is pretty sparse. You could compile and run the application at this point. All you would see would be two labels, each containing the text Label, and a slider covering the default numeric range from 0 to 10.
#### Put some meat on the bones

We will need to put some meat on the bones of this skeleton mxml code in order to create our Flex application. We can accomplish that by setting attribute values that correspond to properties of the controls.

#### Setting attribute values

Once again, we have three choices:

1. Go hardcore and edit the XML code shown in Listing 1 to add the necessary attributes.
2. Stay in Design mode, select each component in the Design tab, and use the Flex Properties tab shown in Figure 6 to set the properties on that component.
3. A combination of 1 and 2 above.

#### The Flex Properties tab

When you select the Design tab shown in Figure 5, the Flex Properties tab shown in Figure 6 appears in the bottom-right of the IDE. The appearance of the Flex Properties tab depends on which component is selected in the Design tab. Figure 5 shows one of the Label controls selected, and Figure 6 shows the Flex Properties tab corresponding to a Label control. You will see a very similar properties tab if you create a new Flash Builder 4 project and specify use of the Flex 3 compiler. Some of the items are in different locations than in Figure 6, but it appears that the Flash Builder 4 properties tab has the same items for a Flex 3 project.

#### A variety of user input controls

The Flex Properties tab contains a variety of user input controls that allow you to specify values for many of the commonly used properties of the selected component. Note, however, that the documentation for the Label control lists many properties that are not supported by the Flex Properties tab shown in Figure 6. You can increase the number of properties shown in the tab by selecting one of the controls at the top of the tab that converts the display into an alphabetical list of properties. However, even this doesn't seem to show all of the properties defined by and inherited by some components.
If you need to set properties that are not supported by the Flex Properties tab, you probably have no choice but to select the Source tab shown in Figure 5 and write mxml code for those properties.

#### Will explain the code in fragments

I will explain the code for this Flex application in fragments. A complete listing of the application is provided in Listing 6 near the end of the lesson.

#### Beginning of XML code for SliderChangeEvent01

The primary purpose of this application is to illustrate the use of inline event handling for Flex 3 slider change events. The application begins in Listing 2, which shows the beginning of the Application element and the two complete Label elements shown at the top of Figure 1.

Listing 2: Beginning of XML code for SliderChangeEvent01.

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Label text="Put your name here" fontSize="16" fontWeight="bold"/>
    <mx:Label text="Image Height in Pixels" fontWeight="bold" fontSize="14"/>
```

#### Make it easy - drag and drop

The two Label elements were created by dragging Label controls from the Components tab shown in Figure 4 onto the Design tab shown in Figure 5. Then the attribute values were set using the Flex Properties tab shown in Figure 6. The attributes shown in Listing 2 represent common properties of a text label and shouldn't require further explanation.

#### Create and condition the slider

Listing 3 adds a horizontal slider (HSlider) control to the application and sets the attributes that control both its appearance and its behavior.

Listing 3: Create and condition the slider.

```xml
<mx:HSlider minimum="100"
            maximum="250"
            value="250"
            toolTip="Height"
            change="myimg.height=event.currentTarget.value"
            liveDragging="true"/>
```

The slider is a little more complicated than a label and deserves a more thorough explanation.

#### The numeric properties

Recall that a slider represents a range of numeric values. The position of the thumb at any instant in time selects a value from that range.
The following three attributes shown in Listing 3 deal with the slider and its numeric range:

- minimum - the numeric value represented by the left end of a horizontal slider.
- maximum - the numeric value represented by the right end of a horizontal slider.
- value - the value that specifies the initial position of the thumb when the slider is constructed and first presented in the application's window.

#### The toolTip property

As you have probably already guessed, the value of the toolTip property specifies the text that appears in the tool tip when it is visible, as shown in Figure 2.

#### The change property

This is where things get a little more interesting. As the user moves the thumb to the left or right, the slider fires a continuous stream of change events. You might think of this as the slider yelling out "Hey, the position of my thumb has been changed." over and over as the thumb is being moved. (Also see the discussion of the liveDragging property later.)

#### An event handler

The value that is assigned to the change attribute in Listing 3 is often referred to as an event handler. This value specifies what the application will do each time the slider fires a change event.

#### Three ways to handle events in Flex

There are at least three ways to handle event notifications in Flex:

- Registering an event handler in mxml
- Creating an inline event handler in the mxml definition
- Registering an event listener through ActionScript

The XML code in Listing 3 uses the inline approach.

#### The inline approach

The advantage of using the inline approach (at least insofar as my Introduction to XML students are concerned) is that it doesn't require you to create a Script element within the mxml or to create a separate ActionScript file.

#### Handling the slider's change event

Now consider the code that begins with the word change in Listing 3. The code within the quotation marks can be a little hard to explain, but I will give it a try.
(The code in quotation marks is actually an ActionScript code fragment.)

#### Think of it this way

There is a Flex/ActionScript class named Event. The reference to event in Listing 3 is a reference to an object of the Event class that comes into being each time the slider fires a change event. The Event object encapsulates a property named currentTarget, which is described in the Flex 3 documentation as follows:

> "The object that is actively processing the Event object with an event listener. For example, if a user clicks an OK button, the current target could be the node containing that button or one of its ancestors that has registered an event listener for that event."

#### The currentTarget is the slider

In this application, the value of currentTarget points to the slider, which is firing change events as the user moves the thumb.

#### The value property of an HSlider object

The slider is an object of the HSlider class, which has a property named value. The value property contains the current position of the thumb and is a number between the minimum and maximum property values.

#### Get the current value of the thumb

Therefore, each time the slider fires a change event, the code on the right side of the assignment operator within the quotation marks in Listing 3 gets the numeric value that indicates the current position of the thumb.

#### Cause the image to be resized

This value is assigned to the height property of the image, causing the overall size of the image to be adjusted, if necessary, to match the current position of the slider's thumb. (I could go into more detail as to the sequence of events that causes the size of the image to change, but I will leave that as an exercise for the student.)

#### The liveDragging property

That leaves one more attribute or property for us to discuss in Listing 3: liveDragging. This one is much easier to understand.
The Flex 3 documentation has this to say about the liveDragging property:

> "Specifies whether live dragging is enabled for the slider. If false, Flex sets the value and values properties and dispatches the change event when the user stops dragging the slider thumb. If true, Flex sets the value and values properties and dispatches the change event continuously as the user moves the thumb. The default value is false."

#### If liveDragging is false...

If you were to modify the code in Listing 3 to cause the value of the liveDragging property to be false (or simply not set the attribute value to true), the slider would only fire a change event each time the thumb stops moving (as opposed to firing a stream of events while the thumb is moving). This, in turn, would cause the size of the image to change only when the thumb stops moving instead of changing continuously while the thumb is moving.

#### An Image control

The Flex 3 documentation tells us:

> "The Image control lets you import JPEG, PNG, GIF, and SWF files at runtime. You can also embed any of these files and SVG files at compile time by using @Embed(source='filename')."

The primary output that is produced by compiling a Flex application is an swf file that can be executed in Flash Player. The documentation goes on to explain that by using @Embed, you can cause resources such as images to be embedded in the swf file. The advantage of embedding is that it eliminates the requirement to distribute the resource files along with the swf files. The disadvantage is that it causes the swf file to be larger.

#### Import an image

Listing 4 imports an image from the file named myimage.jpg that is located in the src folder of the project tree. This image is embedded in the swf file when the Flex application is compiled.

Listing 4: Import an image.
```xml
<mx:Image id="myimg"
          source="@Embed('myimage.jpg')"
          height="250">
</mx:Image>
</mx:Application>
```

#### The id property

Setting the id property on the image to myimg makes it possible to refer to the image in the change-event code in Listing 3. (Note that there is no requirement to set the value of the id property to be the same as the name of the image file, as was done in Listing 4.)

#### The height property

Setting the height property of the image to 250 pixels in Listing 4 causes the image height to be 250 pixels when it is first displayed, as shown in Figure 1.

#### The end of the application

Listing 4 contains the closing tag for the Application element, signaling the end of the Flex 3 application named SliderChangeEvent01.

## The hybrid Flex 3-4 project named SliderChangeEvent02

### The mxml project code

The mxml code for this project is shown in its entirety in Listing 7. If you examine this code you will see that:

- It uses a Flex 4 spark s:Application element instead of a Flex 3 mx:Application element.
- It declares the standard set of Flex 4 namespaces.
- It uses a spark s:VGroup element as the container for the following Flex 3 components (note that the Flex 3 project in Listing 6 doesn't require another container in addition to the mx:Application container):
  - mx:Label
  - mx:Label
  - mx:HSlider
  - mx:Image

Otherwise, the mxml code for this project is the same as the code for the Flex 3 project shown in Listing 6. The mixture of spark and mx components causes this to be a hybrid Flex 3-4 project.

### Visual appearance of the project

If you run the online version of the project named SliderChangeEvent02, you should see an initial screen display similar to Figure 7.

### Compare with the Flex 3 project

By comparing this screen output for the hybrid project with the screen output for the Flex 3 project in Figure 1, you can immediately spot several significant differences:

- The background is white instead of gray.
- The labels, the slider, and the image are not centered horizontally in the browser window.
- The appearance of the thumb on the slider is a circle instead of a triangle.

### Behavior of the project

If you move the slider with the mouse, you will see that the behavior is essentially the same as the Flex 3 version of the project, including the display of a tool tip as shown in Figure 2 and the display of the slider value as shown in Figure 3.

### The spark s:Application element

The Flex 4 spark s:Application element differs from the Flex 3 mx:Application element in several ways, including the following:

- Default layout: Unlike the mx:Application element, the s:Application element does not have a default vertical layout. By default, all components placed in the s:Application element are placed in the upper-left corner. (The s:VGroup container element was used in Listing 7 to resolve this issue.)
- Default background color: The default background color of the s:Application element is white, whereas the default background color of the mx:Application element is gray.
- Horizontal positioning: Unlike the mx:Application element, which centers its components horizontally by default, the default position of components placed in the s:Application element is the upper-left corner.

### The spark s:VGroup element

As shown in Listing 7, the s:VGroup element can be used to arrange the components in a vertical sequence from top to bottom. However, placing components in an s:VGroup element does not cause them to be centered horizontally. Instead, by default, the components end up on the left side of the container, as shown in Figure 7. If you want the components to be centered, you must write additional code to cause that to happen.

### The Flash Builder 4 Components tab

When you create a new Flex project in Flash Builder 4, if you specify the use of the Flex 3 compiler, the Components tab in the resulting IDE will look like Figure 4.
However, if you specify the Flex 4 compiler when you create the new project, the Components tab will look like Figure 8.

### Numerous differences

If you compare Figure 8 with Figure 4, you will see numerous differences between the two lists. Some of the names are the same and some of the names are different. Even though some of the names are the same, most of the components that you see in Figure 8 are Flex 4 spark components, while the components that you see in Figure 4 are Flex 3 mx components, and they are represented by different classes in the class library. However, as you will see later, the Image component in the Flex 4 component list is actually a Flex 3 mx component. That may also be the case for some of the other components as well.

### Same name doesn't guarantee same appearance or same behavior

While the appearance and behavior of a Flex 4 spark component may be the same as the appearance and behavior of a Flex 3 mx component with the same name, there is no guarantee that this will be the case. They are entirely different components, and the only way you can be sure is to study the documentation. As you saw earlier, the appearance of an mx:HSlider used inside an s:Application element is different from that of an mx:HSlider used inside an mx:Application element. Therefore, if the appearance and behavior of the components in your project are really critical, you should probably avoid mixing Flex 3 and Flex 4 components.

### No drag-and-drop support for hybrid projects

Another ramification of the fact that the components in Figure 8 are spark components is that you cannot create hybrid projects using drag-and-drop programming alone. If you drag the components in Figure 8 into the design pane of Flash Builder 4, your project will be populated with Flex 4 spark components. If you want the project to be populated with Flex 3 mx components, you will have to manually edit the mxml code to accomplish that. That may be another reason to avoid hybrid projects.
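Earlier in this lesson, three ways to handle event notifications in Flex were listed, and all of the projects in this lesson use the inline approach. For completeness, the following sketch shows the third approach, registering an event listener through ActionScript, in a Flex 3 application. This sketch is not part of the original projects; the function names init and onSliderChange are hypothetical, while mx.events.SliderEvent and addEventListener are standard Flex 3 API:

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                creationComplete="init()">
    <mx:Script>
        <![CDATA[
            import mx.events.SliderEvent;

            // Hypothetical setup function: registers the change listener
            // through ActionScript instead of an inline change attribute.
            private function init():void {
                myslider.addEventListener(SliderEvent.CHANGE, onSliderChange);
            }

            // Hypothetical handler: a SliderEvent carries the new thumb
            // position in its value property.
            private function onSliderChange(event:SliderEvent):void {
                myimg.height = event.value;
            }
        ]]>
    </mx:Script>
    <mx:HSlider id="myslider" minimum="100" maximum="250"
                value="250" liveDragging="true"/>
    <mx:Image id="myimg" source="@Embed('myimage.jpg')" height="250"/>
</mx:Application>
```

The behavior should match the inline version; the trade-off is a Script element in exchange for a named handler that can be reused, tested, or moved to a separate ActionScript file.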
### Drag-and-drop results

Figure 9 shows the results of dragging one VGroup layout, two Label controls, one HSlider control, and one Image control from the Flash Builder 4 Components panel into the Design panel. Compare Figure 9 with Figure 5 to see the differences in background color and layout.

### Mxml code for the layout shown in Figure 9

Listing 5 shows the Flex 4 mxml code that corresponds to the layout shown in Figure 9.

Listing 5: Mxml code for the layout shown in Figure 9.

```xml
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               minWidth="955" minHeight="600">
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:VGroup x="0" y="0" width="200" height="200">
        <s:Label text="Label"/>
        <s:Label text="Label"/>
        <s:HSlider/>
        <mx:Image/>
    </s:VGroup>
</s:Application>
```

### Mostly Flex 4 spark components

The most important thing to note about Listing 5 is that the VGroup, Label, and HSlider components that were dragged from the Components tab shown in Figure 8 into the Design panel shown in Figure 9 are all declared using the spark (s) namespace. Curiously, however, the Image control shows up in the code as mx:Image instead of s:Image.

## The Flex 4 project named SliderChangeEvent03

Updating the mxml code in Listing 5 by applying the properties from Listing 7 to the Label and HSlider components produces the complete Flex 4 project named SliderChangeEvent03 shown in Listing 8. If you run the online versions of SliderChangeEvent02 and SliderChangeEvent03 side by side in different browser windows, you will probably notice a few subtle differences in the look and feel of the two programs. Here are some of the differences that I have noticed:

- There is less space between the labels and the slider in the Flex 4 version.
- There is less space between the edge of the Flash window and the top and left ends of the labels and the slider in the Flex 4 version.
- The overall length of the slider is shorter in the Flex 4 version.
- The treatment of the little popup window that shows the value of the slider is different between the two. It has black letters on a cream-colored background in the hybrid version, as in Figure 3, but it has white letters on a black background in the Flex 4 version.

## Run the programs

I encourage you to run the online versions of the programs from this lesson. Then copy the code from Listing 6, Listing 7, and Listing 8. Use that code to create Flex projects of your own. Compile and run your projects. Experiment with the code, making changes and observing the results of your changes. Make certain that you can explain why your changes behave as they do.

## Resources

I will publish a list containing links to Flex resources as a separate document. Search for Flex Resources in the Connexions search box.

## Complete program listing

Complete listings of the programs discussed in this lesson are shown in Listing 6, Listing 7, and Listing 8 below.

Listing 6: Complete listing of SliderChangeEvent01.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- SliderChangeEvent01
     Illustrates the use of inline event handling for
     slider change events.
-->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Label text="Put your name here" fontSize="16" fontWeight="bold"/>
    <mx:Label text="Image Height in Pixels" fontWeight="bold" fontSize="14"/>
    <mx:HSlider minimum="100"
                maximum="250"
                value="250"
                toolTip="Height"
                change="myimg.height=event.currentTarget.value"
                liveDragging="true"/>
    <mx:Image id="myimg"
              source="@Embed('myimage.jpg')"
              height="250">
    </mx:Image>
</mx:Application>
```

Listing 7: Complete listing of SliderChangeEvent02.
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">
    <s:VGroup>
        <mx:Label fontWeight="bold"/>
        <mx:Label text="Image Height in Pixels" fontWeight="bold" fontSize="14"/>
        <mx:HSlider minimum="100" maximum="250" value="250"
                    toolTip="Height"
                    change="myimg.height=event.currentTarget.value"
                    liveDragging="true"/>
        <mx:Image id="myimg" source="@Embed('myimage.jpg')" height="250">
        </mx:Image>
    </s:VGroup>
</s:Application>

Listing 8: Complete listing of SliderChangeEvent03.

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               minWidth="955" minHeight="600">
    <s:VGroup x="0" y="0" width="200" height="200">
        <s:Label fontWeight="bold"/>
        <s:Label text="Image Height in Pixels" fontWeight="bold" fontSize="14"/>
        <s:HSlider minimum="100" maximum="250" value="250"
                   toolTip="Height"
                   change="myimg.height=event.currentTarget.value"
                   liveDragging="true"/>
        <mx:Image id="myimg" source="@Embed('myimage.jpg')" height="250"/>
    </s:VGroup>
</s:Application>

Miscellaneous

Note: Housekeeping material
• Module name: Handling Slider Change Events in Flex 3 and Flex 4
• Files:
  • Flex0104\Flex0104.htm
  • Flex0104\Connexions\FlexXhtml0104.htm

Note: PDF disclaimer: Although the Connexions site makes it possible for you to download a PDF file for this module at no charge, and also makes it possible for you to purchase a pre-printed version of the PDF file, you should be aware that some of the HTML elements in this module may not translate well into PDF.

-end-
https://www.helpteaching.com/questions/289951/given-the-following-chemical-equation-2n2h4g-n2o4g-rarr-3n2g
##### Question Info

This question is public and is used in 156 tests or worksheets.

Type: Multiple-Choice
Category: Stoichiometry
Score: 1
Author: agpecho

# Stoichiometry Question

$2N_2H_4(g) + N_2O_4(g) \rightarrow 3N_2(g) + 4H_2O(g)$

When 16.0 g of $N_2H_4$ (gram-formula mass = 32 g/mol) and 184 g of $N_2O_4$ (gram-formula mass = 92 g/mol) react based on the chemical equation above, what is the maximum mass of water that can be produced?
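One way to work the problem is a quick limiting-reagent check, sketched here in a few lines of Python arithmetic (the molar masses are the ones given in the question; 18 g/mol for water is assumed):

```python
# Limiting-reagent check for 2 N2H4 + N2O4 -> 3 N2 + 4 H2O
mol_n2h4 = 16.0 / 32.0   # 0.5 mol
mol_n2o4 = 184.0 / 92.0  # 2.0 mol
# The equation needs 2 mol N2H4 per 1 mol N2O4; 0.5 mol N2H4 would
# consume only 0.25 mol N2O4, so N2H4 is the limiting reagent.
mol_h2o = mol_n2h4 * (4 / 2)   # 4 H2O produced per 2 N2H4
mass_h2o = mol_h2o * 18.0      # g, using 18 g/mol for water
print(mass_h2o)                # -> 18.0
```

So even though far more N2O4 is supplied, the hydrazine runs out first and caps the yield at 18 g of water.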
https://www.r-bloggers.com/2017/06/weighted-population-density/
DENSITY THE AVERAGE PERSON EXPERIENCES

In Alaska, there's about one person for each square mile of land. You might picture a typical Alaskan not being able to see the next house. But it's not that way of course. Most of Alaska is uninhabited. People have crowded into a few areas. The average Alaskan experiences a population density of about 72 people per square mile. That's a lot more than one.

In the R code below, we roughly estimate the weighted population density for each US state, that is, the population density that the average person experiences. We do this for each state by taking the average of its counties' population densities, weighted by the population of each county. It would be even better to do this for smaller areas, such as census tracts, but we were too lazy to chase down the data.

In the figure at the top of this post, we see the weighted and unweighted population densities for each state. Note how New Jersey has a higher population density than New York, but when you look at what the average person experiences, it flips. Amazingly, the average person in New York State shares a square mile with more than 10,000 other residents!

What states have the biggest ratios of weighted to unweighted densities? The chart below shows states with a 10x or greater ratio in blue.

Here are the states with the biggest ratios:

Alaska – The average person experiences a population density that is 62 times greater than the state's density.
New York – 39 times Utah – 21 times Minnesota – 14 times Oregon – 13 times New Mexico – 12 times Kansas – 12 times Texas – 12 times Illinois – 12 times R CODE FOR THOSE WHO WISH TO FOLLOW ALONG AT HOME library(tidyverse) library(ggrepel) library(scales) setwd("C:/Dropbox/Projects/20170605_Population_Density") #source https://factfinder.census.gov/bkmk/table/1.0/en/DEC/10_SF1/GCTPH1.US05PR df <- read_csv("DEC_10_SF1_GCTPH1.US05PR.csv", skip = 1) names(df) = c( "id", "state", "country", "geo_id", "geo_id_suffix", "geographic_area", "county_name", "population", "housing_units", "total_area", "water_area", "land_area", "density_population_sqmi_land", "density_housing_units_sqmi_land" ) #drop puerto rico and DC. sorry guys! df = df %>% filter(geo_id != "0400000US72") %>% filter(geo_id != "0500000US11001") %>% filter(geo_id != "0400000US11") #make a state data frame with just four facts for each state (for later joining) sdf = df %>% filter(!is.na(geo_id_suffix)) %>% filter(stringr::str_length(geo_id_suffix) < 5) %>% #states have short geoids mutate( state = stringr::str_sub(geo_id_suffix, 1, 2), geographic_area = stringr::str_sub(geographic_area, 16, stringr::str_length(geographic_area)) ) %>% select(state, geographic_area, population, density_population_sqmi_land) names(sdf) = c("state", "geographic_area", "state_pop", "state_density") #clean up county data, dropping irrelevant cols df = df %>% filter(!is.na(geo_id_suffix)) %>% filter(stringr::str_length(geo_id_suffix) == 5) %>% #counties have geoids of length 5 mutate(state = stringr::str_sub(geo_id_suffix, 1, 2)) %>% select( #drop unneeded columns -id,-country,-geo_id,-housing_units,-total_area, -water_area,-density_housing_units_sqmi_land ) #join the state data with the county data result = left_join(df, sdf, by = "state") %>% group_by(state) %>% summarise(weighted_density = round(sum( population / state_pop * density_population_sqmi_land ), 0)) %>% ungroup() %>% left_join(sdf, .) 
%>% arrange(-weighted_density) %>% #mark states with weighted density 10x higher than unweighted density mutate(highlight = weighted_density / state_density > 10) write_csv(result, "result.csv") #graphit, Schulte style p = ggplot(result, aes(x = state_density, y = weighted_density, color = highlight)) + theme_bw() + scale_x_log10(breaks = c(1, 3, 10, 30, 100, 300, 1000, 3000, 10000), label = comma) + scale_y_log10(breaks = c(1, 3, 10, 30, 100, 300, 1000, 3000, 10000), label = comma) + geom_point() + geom_text_repel(aes(label = geographic_area)) + geom_abline(slope = 1) + theme(legend.position = "none") + labs(x = "Unweighted Population Density", y = "Weighted Population Density") p ggsave( plot = p, file = "unweighted_v_weighted_density.png", height = 8, width = 8 ) #make a long version of result with two rows per state result_l = result %>% mutate(sortval = weighted_density) %>% gather(measure, density, state_density:weighted_density) %>% arrange(sortval, measure) %>% mutate(measure = factor(measure, levels = c("weighted_density", "state_density"))) #graph it p = ggplot(result_l, aes( x = density, # make the rows be states sorted by weighted density y = reorder(geographic_area, sortval), color = measure )) + theme_bw() + geom_point(size = 3) + #connect the two measures for each state with a line geom_line(aes(group = geographic_area), color = "black") + scale_x_log10(breaks = c(10, 30, 100, 300, 1000, 3000, 10000), label = comma) + theme(legend.position = "bottom") + labs(x = "Population density", y = "States ranked by weighted population density") + scale_color_discrete( name = "", breaks = c("weighted_density", "state_density"), labels = c("Weighted Population Density", "Unweighted Population Density") ) p ggsave( plot = p, file = "state_v_unweighted_and_weighted_density.png", height = 8, width = 6 ) H/T Jake Hofman for getting me to do this and talking R. H/T to Hadley Wickham for creating the tidyverse. 
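The county-weighted average at the heart of the R code above is easy to illustrate with toy numbers (Python here; the two counties are made up, not census data): each county contributes its own density, weighted by its share of the state's population.

```python
# Toy "state" with two counties: a dense city and a huge empty hinterland.
counties = [
    # (population, land_area_sq_mi)
    (300_000, 100),      # city county: 3000 people / sq mi
    (100_000, 100_000),  # rural county: 1 person / sq mi
]
total_pop = sum(p for p, _ in counties)
total_area = sum(a for _, a in counties)

# Unweighted: total people over total land, ~4 people / sq mi.
unweighted = total_pop / total_area

# Weighted: each county's density, weighted by its population share,
# ~2250 people / sq mi -- what the average resident actually experiences.
weighted = sum((p / total_pop) * (p / a) for p, a in counties)
print(round(unweighted, 2), round(weighted, 2))
```

The gap between the two numbers is exactly the Alaska effect described at the top of the post.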
The post Weighted population density appeared first on Decision Science News.
http://mathhelpforum.com/trigonometry/41761-trignometry-print.html
# Trigonometry

• June 17th 2008, 12:20 AM
lemontea

Trigonometry

1) Simplify sin^2 (sin^-1 (4/x))

2) a, b in (0, 2pi) and sin(a) = sqrt(3)/2, cos(b) = -sqrt(3)/2, tan(a+b) = -1/sqrt(3); give the exact values of a and b in terms of pi.
a: ____ pi
b: ____ pi

• June 17th 2008, 12:46 AM
TheEmptySet

Quote:

Originally Posted by lemontea
2) a, b in (0, 2pi) and sin(a) = sqrt(3)/2, cos(b) = -sqrt(3)/2, tan(a+b) = -1/sqrt(3); give the exact values of a and b in terms of pi.

There are two angles such that $\sin(a)=\frac{\sqrt{3}}{2}$, namely $a=\frac{\pi}{3} \mbox{ or } \frac{2\pi}{3}$

Also with cosine, $\cos(b)=-\frac{\sqrt{3}}{2}$ gives $b=\frac{5\pi}{6} \mbox{ or } \frac{7\pi}{6}$

and $\tan(a+b)=-\frac{1}{\sqrt{3}}$ gives $a+b=\frac{5\pi}{6} \mbox{ or } \frac{11\pi}{6}$

So for all three to be true: $a=\frac{2\pi}{3}$, $b=\frac{7\pi}{6}$, $a+b=\frac{11\pi}{6}$

Hint: for the first one, $\sin^2\left( \sin^{-1}\left( \frac{4}{x}\right)\right)= \left[\sin\left(\sin^{-1}\left( \frac{4}{x}\right)\right)\right]^2$
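Both parts can be spot-checked numerically (a quick sketch with Python's math module; the check of part 1 assumes |x| >= 4 so the arcsine is defined):

```python
import math

# Part 1: sin^2(arcsin(4/x)) should collapse to (4/x)^2 = 16/x^2.
for x in (4.0, 5.0, 10.0):
    assert math.isclose(math.sin(math.asin(4 / x)) ** 2, 16 / x**2)

# Part 2: a = 2*pi/3 and b = 7*pi/6 satisfy all three conditions at once.
a, b = 2 * math.pi / 3, 7 * math.pi / 6
assert math.isclose(math.sin(a), math.sqrt(3) / 2)
assert math.isclose(math.cos(b), -math.sqrt(3) / 2)
assert math.isclose(math.tan(a + b), -1 / math.sqrt(3))
```

Note that the other candidate pairs (e.g. a = pi/3 with b = 5pi/6) fail the tangent condition, which is why only one combination survives.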
https://zenodo.org/record/346001/export/schemaorg_jsonld
Working paper Open Access # Some steps towards a theory of expressions Colignatus, Thomas ### JSON-LD (schema.org) Export { "description": "<p>The mathematical \"theory of expressions\" better be developed in a general fashion, so that it can be referred to in various applications. The paper discusses an example when there is a confusion between syntax (unevaluated) and semantics (evaluated), when substitution causes a contradiction. However, education should not wait till such a mathematical theory of expressions is fully developed. Computer algebra is sufficiently developed to support and clarify these issues. Fractions <em>y</em> / <em>x</em> can actually be abolised and replaced with <em>y</em> <em>x</em><sup><em>H</em></sup> with <em>H</em> = -1 as a constant like exponential number <em>e</em> or imaginary number <em>i</em>.</p>", "creator": [ { "affiliation": "Samuel van Houten Genootschap", "@type": "Person", "name": "Colignatus, Thomas" } ], "headline": "Some steps towards a theory of expressions", "citation": [ { "@id": "https://doi.org/10.5281/zenodo.291979", "@type": "CreativeWork" }, { "@id": "https://doi.org/10.5281/zenodo.292244", "@type": "CreativeWork" }, { "@id": "https://doi.org/10.5281/zenodo.292247", "@type": "CreativeWork" } ], "datePublished": "2017-03-06", "url": "https://zenodo.org/record/346001", "keywords": [ "Expression, fraction, syntax, semantics, substitution, numerator, denominator, H, computer algebra, Mathematica, mathematics education" ], "@context": "https://schema.org/", "identifier": "https://doi.org/10.5281/zenodo.346001", "@id": "https://doi.org/10.5281/zenodo.346001", "@type": "ScholarlyArticle", "name": "Some steps towards a theory of expressions" } 28 2 views
https://yaoyao.codes/tdd/2016/07/17/digest-of-working-effectively-with-legacy-code
# Digest of Working Effectively with Legacy Code Yao Yao on July 17, 2016 # Part I - The Mechanics of Change ## Chapter 1 - Changing Software Code behavior: • If we have to modify code (and HTML kind of counts as code), we could be changing behavior. • If we are only adding code and calling it, we are often adding behavior. • Adding a method doesn’t change behavior unless the method is called somehow. Four primary reasons to change software: 1. Adding a feature 2. Fixing a bug 3. Improving the design • The act of improving design without changing its behavior is called refactoring. 4. Optimizing resource usage • With both refactoring and optimization, we say, “We’re going to keep functionality exactly the same when we make changes, but we are going to change something else.” • In refactoring, the “something else” is program structure; we want to make it easier to maintain. • In optimization, the “something else” is some resource used by the program, usually time or memory. The difference between good systems and bad ones is that, • In the good ones, you feel pretty calm after you’ve done that learning, and you are confident in the change you are about to make. • In poorly structured code, the move from figuring things out to making changes feels like jumping off a cliff to avoid a tiger. You hesitate and hesitate. “Am I ready to do it? Well, I guess I have to.” ## Chapter 2 - Working with Feedback Changes in a system can be made in two primary ways. I like to call them • Edit and Pray and • Cover and Modify. Two types of testing: • “testing to attempt to show correctness” • “testing to detect change” • which serves as a software vise A unit test that takes 1/10th of a second to run is a slow unit test. A test is not a unit test if: 1. It talks to a database. 2. It communicates across a network. 3. It touches the file system. 4. You have to do special things to your environment (such as editing configuration files) to run it. Tests that do these things aren’t bad. 
Often they are worth writing, and you generally will write them in unit test harnesses. However, it is important to be able to separate them from true unit tests so that you can keep a set of tests that you can run fast whenever you make changes. Dependencies: • When classes depend directly on things that are hard to use in a test, they are hard to modify and hard to work with. • Much legacy code work involves breaking dependencies so that change can be easier. • The Legacy Code Dilemma: When we change code, we should have tests in place. To put tests in place, we often have to change code. • When you break dependencies in legacy code, you often have to suspend your sense of aesthetics a bit. Some dependencies break cleanly; others end up looking less than ideal from a design point of view. The Legacy Code Change Algorithm 1. Identify change points. • Chapter 16, I Don’t Understand the Code Well Enough to Change It. • Chapter 17, My Application Has No Structure. 2. Find test points. • Chapter 11, I Need to Make a Change. What Methods Should I Test? • Chapter 12, I Need to Make Many Changes in One Area. Do I Have to Break Dependencies for All the Classes Involved? 3. Break dependencies. • Chapter 9, I Can’t Get This Class into a Test Harness. • Chapter 10, I Can’t Run This Method in a Test Harness. • Chapter 22, I Need to Change a Monster Method and I Can’t Write Tests for It. • Chapter 23, How Do I Know That I’m Not Breaking Anything? 4. Write tests. • Chapter 13, I Need to Make a Change but I Don’t Know What Tests to Write. • Chapter 19, My Project is Not Object- Oriented. How Do I Make Safe Changes? 5. Make changes and refactor. • Chapter 8, How Do I Add a Feature? • Chapter 21, I’m Changing the Same Code All Over the Place. • Chapter 20, This Class Is Too Big and I Don’t Want It to Get Any Bigger. • Chapter 22, I Need to Change a Monster Method and I Can’t Write Tests for It. 
## Chapter 3 - Sensing and Separation

Generally, when we want to get tests in place, there are two reasons to break dependencies: sensing and separation.

1. Sensing: We break dependencies to sense when we can't access values our code computes.
  • One dominant technique for sensing is Faking Collaborators.
  • A fake object is an object that impersonates some collaborator of your class when it is being tested.
  • Mock objects are fakes that perform assertions internally.
  • However, mock object frameworks are not available in all languages, and simple fake objects suffice in most situations.
2. Separation: We break dependencies to separate when we can't even get a piece of code into a test harness to run.

```java
import junit.framework.*;

public class SaleTest extends TestCase {
    public void testDisplayAnItem() {
        FakeDisplay display = new FakeDisplay();
        Sale sale = new Sale(display);
        sale.scan("1");
        assertEquals("Milk $3.99", display.getLastLine());
    }

    public void testDisplayAnItem2() {
        MockDisplay display = new MockDisplay();
        display.setExpectation("showLine", "Milk $3.99");
        Sale sale = new Sale(display);
        sale.scan("1");
        display.verify();
    }
}
```

## Chapter 4 - The Seam Model

A seam is a place where you can alter behavior in your program without editing in that place.

This seam is what I call an object seam. We were able to change the method that is called without changing the method that calls it. Object seams are available in object-oriented languages, and they are only one of many different kinds of seams.
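The Java tests in Chapter 3 sense through exactly such an object seam: the display is handed to Sale, so a test can substitute a fake without editing Sale itself. A minimal Python sketch of the same pattern (class and method names are illustrative, not taken from the book):

```python
class FakeDisplay:
    """Fake collaborator: records what Sale asks it to show."""
    def __init__(self):
        self.last_line = None

    def show_line(self, line):
        self.last_line = line


class Sale:
    # The display is passed in rather than constructed internally --
    # this constructor parameter is the object seam.
    def __init__(self, display):
        self.display = display

    def scan(self, barcode):
        # Hypothetical lookup; a real Sale would consult an item database.
        items = {"1": "Milk $3.99"}
        self.display.show_line(items[barcode])


display = FakeDisplay()
Sale(display).scan("1")
assert display.last_line == "Milk $3.99"
```

Because the behavior is altered at the call site (by passing a different object) rather than by editing Sale, the production code stays untouched while the test gains full visibility.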
https://www.physicsforums.com/threads/volume-translated-peng-robinson-equation-of-state.953429/
# Volume translated Peng-Robinson equation of state

## Main Question or Discussion Point

Hi. Please excuse my ignorance, but these volume-translation formulas for EOS confuse me to no end. Could someone tell me how the volume-translated Peng-Robinson exactly works? How do I calculate the fugacity expression of VTPR? Do I integrate the V + c terms against dV, or do I integrate entirely with respect to d(V + c)? According to this paper (Tsai, J-C., Chen, Y-P.: Application of a volume-translated Peng-Robinson equation of state on vapor-liquid equilibrium calculations, 1997): https://ibb.co/fGeMhe If by comparison the fugacity equation is the same as the original PR EOS, then I assume that they integrated the volume-translated equation with respect to d(V+c)? What I did to verify is to numerically integrate the volume-translated PR EOS, and the result did not equal the result of the integrated equation (did it in MATLAB, as seen here) https://ibb.co/k1Gv8K

dRic2 Gold Member

What is ##t##? Is it a sort of constant? Ok, I Wikipedia(d) it and I understand it is a correction on the volume and it is a constant. So just substitute ##V' = V + t## so ##Z'## will be: ##Z' = \frac{PV'}{RT} = Z + T^*## and now you have a standard PR EoS. You can use the standard expression for ##\log(φ)## with ##Z'## instead of ##Z##

Last edited:

Sadly, I got even more confused. I hope you could guide me; maybe I could begin by understanding the meaning of V in the equation of state itself. Is it Vexp or VEOS?
This is so confusing, sorry :(

dRic2 Gold Member

I'm not very familiar with this topic; I'll tell you what I figured out by looking at my book on thermodynamics and some Wikipedia.

First, how do you calculate the fugacity coefficient ##φ##? As you did, you use the following formula (1):
$$\log(φ) = \int_0^P (Z-1) \frac {dP} P$$
with (2)
$$Z = \frac {PV}{RT}$$
The standard PR EoS looks like this (3):
$$P = \frac {RT}{V-b} - \frac {a α(T)} {V^2 + 2bV - b^2}$$
and we have an analytical solution for the integral of equation 1. My book gives the following solution (4):
$$\log(φ) = Z-1-\log(Z-B) - \frac A {2\sqrt{2}B}\log \frac {Z + B(1 + \sqrt{2})}{Z + B(1 - \sqrt{2})}$$
But, sadly, we are not working with a standard PR (equation 3); instead we are working with the VTPR EoS (https://en.wikipedia.org/wiki/VTPR) (5):
$$P = \frac {RT}{V + c - b} - \frac {a α(T)} {(V+c)^2 + 2b(V+c) - b^2}$$
where ##c## is a constant. So, here is the trick: use the substitution ##\hat V = V + c## so you can re-write equation 5 like this (6):
$$P = \frac {RT}{ \hat V - b} - \frac {a α(T)} {\hat V^2 + 2b\hat V - b^2}$$
which is exactly like the standard PR EoS (equation 3)! This means you can use the result found above (4). Then equation 2 becomes (7):
$$\hat Z = \frac {P \hat V} {RT} = \frac {P(V+c)}{RT} = \frac {PV} {RT} + \frac {Pc} {RT} = Z + T^*$$
where I used the definition of ##T^* = \frac {Pc} {RT}## found in the paper you attached. This means that
$$\log(φ) = \hat Z-1-\log( \hat Z-B) - \frac A {2\sqrt{2}B}\log \frac {\hat Z + B(1 + \sqrt{2})}{\hat Z + B(1 - \sqrt{2})}$$
$$= Z + T^*-1-\log(Z + T^*-B) - \frac A {2\sqrt{2}B}\log \frac {Z + T^* + B(1 + \sqrt{2})}{Z + T^* + B(1 - \sqrt{2})}$$

Wow, thanks. Actually I tried going back to the (Z-1)/P dP integral and worked from there. You killed off a lot of doubts. Apparently the paper derives the equations very poorly, and the paper interchanged Vexp and VEOS, which confused me even more. Thank you!
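The substitution argument is easy to check numerically: evaluating the volume-translated pressure at V must give exactly the same number as the untranslated PR pressure evaluated at the shifted volume V + c. A quick Python sketch (the parameter values below are illustrative, not fitted to any real fluid):

```python
R = 8.314  # J/(mol K)

def pr_pressure(T, V, a_alpha, b):
    # Standard Peng-Robinson form, equation (3) above.
    return R * T / (V - b) - a_alpha / (V**2 + 2 * b * V - b**2)

def vtpr_pressure(T, V, a_alpha, b, c):
    # VTPR form, equation (5): every V is replaced by V + c.
    Vh = V + c
    return R * T / (Vh - b) - a_alpha / (Vh**2 + 2 * b * Vh - b**2)

# Illustrative (made-up) parameters in SI units.
T, V = 300.0, 1.0e-4
a_alpha, b, c = 0.45, 3.0e-5, 5.0e-6

# VTPR at V is identical to plain PR at the translated volume V + c.
assert vtpr_pressure(T, V, a_alpha, b, c) == pr_pressure(T, V + c, a_alpha, b)
```

This is the whole content of the "hat" substitution: the translated equation is the standard PR equation read at a shifted volume, which is why the standard fugacity expression applies with ##\hat Z = Z + T^*##.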
http://experiment-ufa.ru/-4(7-x)(7+x)=4x(x-5)
-4(7-x)(7+x)=4x(x-5) Simple and best practice solution for -4(7-x)(7+x)=4x(x-5) equation. Check how easy it is, and learn it for the future. Our solution is simple, and easy to understand, so dont hesitate to use it as a solution of your homework. If it's not what You are looking for type in the equation solver your own equation and let us solve it. Solution for -4(7-x)(7+x)=4x(x-5) equation: Simplifying -4(7 + -1x)(7 + x) = 4x(x + -5) Multiply (7 + -1x) * (7 + x) -4(7(7 + x) + -1x * (7 + x)) = 4x(x + -5) -4((7 * 7 + x * 7) + -1x * (7 + x)) = 4x(x + -5) -4((49 + 7x) + -1x * (7 + x)) = 4x(x + -5) -4(49 + 7x + (7 * -1x + x * -1x)) = 4x(x + -5) -4(49 + 7x + (-7x + -1x2)) = 4x(x + -5) Combine like terms: 7x + -7x = 0 -4(49 + 0 + -1x2) = 4x(x + -5) -4(49 + -1x2) = 4x(x + -5) (49 * -4 + -1x2 * -4) = 4x(x + -5) (-196 + 4x2) = 4x(x + -5) Reorder the terms: -196 + 4x2 = 4x(-5 + x) -196 + 4x2 = (-5 * 4x + x * 4x) -196 + 4x2 = (-20x + 4x2) Add '-4x2' to each side of the equation. -196 + 4x2 + -4x2 = -20x + 4x2 + -4x2 Combine like terms: 4x2 + -4x2 = 0 -196 + 0 = -20x + 4x2 + -4x2 -196 = -20x + 4x2 + -4x2 Combine like terms: 4x2 + -4x2 = 0 -196 = -20x + 0 -196 = -20x Solving -196 = -20x Solving for variable 'x'. Move all terms containing x to the left, all other terms to the right. Add '20x' to each side of the equation. -196 + 20x = -20x + 20x Combine like terms: -20x + 20x = 0 -196 + 20x = 0 Add '196' to each side of the equation. -196 + 196 + 20x = 0 + 196 Combine like terms: -196 + 196 = 0 0 + 20x = 0 + 196 20x = 0 + 196 Combine like terms: 0 + 196 = 196 20x = 196 Divide each side by '20'. x = 9.8 Simplifying x = 9.8`
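A quick sanity check on the result is to plug x = 9.8 back into both sides of the original equation (a few lines of Python):

```python
# Verify x = 9.8 solves -4(7-x)(7+x) = 4x(x-5)
x = 9.8
lhs = -4 * (7 - x) * (7 + x)
rhs = 4 * x * (x - 5)
print(lhs, rhs)  # both evaluate to 188.16 (up to float rounding)
```

Both sides agree, confirming the solution.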
2017-11-19 10:28:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25587621331214905, "perplexity": 3028.560840386327}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805541.30/warc/CC-MAIN-20171119095916-20171119115916-00772.warc.gz"}
https://socratic.org/questions/what-is-the-difference-between-branched-and-unbranched-alkanes
# What is the difference between branched and unbranched alkanes?

Jan 18, 2016

Any alkane that has a carbon atom adjacent to 3 or 4 other carbon atoms is considered a branched alkane. Any alkane in which every carbon atom is adjacent to only 1 or 2 other carbon atoms is an unbranched alkane.

#### Explanation:

Example of branched butane (isobutane): $\text{HC}(\text{CH}_3)_3$; unbranched butane (n-butane): ${\text{H}_3\text{C}(\text{CH}_2)_2\text{CH}}_3$. Due to the difference in structure, you would expect some physical properties, such as boiling point, of branched and unbranched alkanes to differ.
2022-07-07 07:38:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32529357075691223, "perplexity": 2797.780608914186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683708.93/warc/CC-MAIN-20220707063442-20220707093442-00670.warc.gz"}
https://openstax.org/books/university-physics-volume-3/pages/11-problems
University Physics Volume 3

# Problems

### 11.1 Introduction to Particle Physics

31. How much energy is released when an electron and a positron at rest annihilate each other? (For particle masses, see Table 11.1.)

32. If $1.0\times10^{30}\,\text{MeV}$ of energy is released in the annihilation of a sphere of matter and a sphere of antimatter, and the spheres are of equal mass, what are the masses of the spheres?

33. When both an electron and a positron are at rest, they can annihilate each other according to the reaction $e^- + e^+ \to \gamma + \gamma.$ In this case, what are the energy, momentum, and frequency of each photon?

34. What is the total kinetic energy carried away by the particles of the following decays? (a) $\pi^0 \to \gamma + \gamma$ (b) $K^0 \to \pi^+ + \pi^-$ (c) $\Sigma^+ \to n + \pi^+$ (d) $\Sigma^0 \to \Lambda^0 + \gamma$.

### 11.2 Particle Conservation Laws

35. Which of the following decays cannot occur because the law of conservation of lepton number is violated? (a) $n \to p + e^-$ (b) $\mu^+ \to e^+ + \nu_e$ (c) $\pi^+ \to e^+ + \nu_e + \bar{\nu}_\mu$ (d) $p \to n + e^+ + \nu_e$ (e) $\pi^- \to e^- + \bar{\nu}_e$ (f) $\mu^- \to e^- + \bar{\nu}_e + \nu_\mu$ (g) $\Lambda^0 \to \pi^- + p$ (h) $K^+ \to \mu^+ + \nu_\mu$

36. Which of the following reactions cannot occur because the law of conservation of strangeness is violated? (a) $p + n \to p + p + \pi^-$ (b) $p + n \to p + p + K^-$ (c) $K^- + p \to K^- + \Sigma^+$ (d) $\pi^- + p \to K^+ + \Sigma^-$ (e) $K^- + p \to \Xi^0 + K^+ + \pi^-$ (f) $K^- + p \to \Xi^0 + \pi^- + \pi^-$ (g) $\pi^+ + p \to \Sigma^+ + K^+$ (h) $\pi^- + n \to K^- + \Lambda^0$

37.
Identify one possible decay for each of the following antiparticles: (a) $\bar{n}$, (b) $\bar{\Lambda}^0$, (c) $\Omega^+$, (d) $K^-$, and (e) $\bar{\Sigma}^-$.

38. Each of the following strong nuclear reactions is forbidden. Identify a conservation law that is violated for each one. (a) $p + \bar{p} \to p + n + \bar{p}$ (b) $p + n \to p + \bar{p} + n + \pi^+$ (c) $\pi^- + p \to \Sigma^+ + K^-$ (d) $K^- + p \to \Lambda^0 + n$

### 11.3 Quarks

39. Based on the quark composition of a proton, show that its charge is $+1$.

40. Based on the quark composition of a neutron, show that its charge is 0.

41. Argue that the quark composition given in Table 11.5 for the positive kaon is consistent with the known charge, spin, and strangeness of this meson.

42. Mesons are formed from the following combinations of quarks (subscripts indicate color and $AR = \text{antired}$): $(d_R, \bar{d}_{AR})$, $(s_G, \bar{u}_{AG})$, and $(s_R, \bar{s}_{AR})$. (a) Determine the charge and strangeness of each combination. (b) Identify one or more mesons formed by each quark-antiquark combination.

43. Why can't either set of quarks shown below form a hadron?

44. Experimental results indicate an isolated particle with charge $+2/3$, that is, an isolated quark. What quark could this be? Why would this discovery be important?

45. Express the $\beta$ decays $n \to p + e^- + \bar{\nu}$ and $p \to n + e^+ + \nu$ in terms of $\beta$ decays of quarks. Check to see that the conservation laws for charge, lepton number, and baryon number are satisfied by the quark $\beta$ decays.

### 11.4 Particle Accelerators and Detectors

46. A charged particle in a 2.0-T magnetic field is bent in a circle of radius 75 cm. What is the momentum of the particle?

47. A proton track passes through a magnetic field with radius of 50 cm. The magnetic field strength is 1.5 T. What is the total energy of the proton?

48.
Derive the equation $p = 0.3Br$ using the concepts of centripetal acceleration (Motion in Two and Three Dimensions) and relativistic momentum (Relativity).

49. Assume that the beam energy of an electron-positron collider is approximately 4.73 GeV. What is the total mass (W) of a particle produced in the annihilation of an electron and positron in this collider? What meson might be produced?

50. At full energy, protons in the 2.00-km-diameter Fermilab synchrotron travel at nearly the speed of light, since their energy is about 1000 times their rest mass energy. (a) How long does it take for a proton to complete one trip around? (b) How many times per second will it pass through the target area?

51. Suppose a $W^-$ created in a particle detector lives for $5.00\times10^{-25}\,\text{s}$. What distance does it move in this time if it is traveling at 0.900c? (Note that the time is longer than the given $W^-$ lifetime, which can be due to the statistical nature of decay or time dilation.)

52. What length track does a $\pi^+$ traveling at 0.100c leave in a bubble chamber if it is created there and lives for $2.60\times10^{-8}\,\text{s}$? (Those moving faster or living longer may escape the detector before decaying.)

53. The 3.20-km-long SLAC produces a beam of 50.0-GeV electrons. If there are 15,000 accelerating tubes, what average voltage must be across the gaps between them to achieve this energy?

### 11.5 The Standard Model

54. Using the Heisenberg uncertainty principle, determine the range of the weak force if this force is produced by the exchange of a Z boson.

55. Use the Heisenberg uncertainty principle to estimate the range of a weak nuclear decay involving a graviton.

56. (a) The following decay is mediated by the electroweak force: $p \to n + e^+ + \nu_e.$ Draw the Feynman diagram for the decay. (b) The following scattering is mediated by the electroweak force: $\nu_e + e^- \to \nu_e + e^-.$ Draw the Feynman diagram for the scattering.

57.
Assuming conservation of momentum, what is the energy of each $\gamma$ ray produced in the decay of a neutral pion at rest, in the reaction $\pi^0 \to \gamma + \gamma$?

58. What is the wavelength of a 50-GeV electron, which is produced at SLAC? This provides an idea of the limit to the detail it can probe.

59. The primary decay mode for the negative pion is $\pi^- \to \mu^- + \bar{\nu}_\mu.$ (a) What is the energy release in MeV in this decay? (b) Using conservation of momentum, how much energy does each of the decay products receive, given the $\pi^-$ is at rest when it decays? You may assume the muon antineutrino is massless and has momentum $p = E/c$, just like a photon.

60. Suppose you are designing a proton decay experiment and you can detect 50 percent of the proton decays in a tank of water. (a) How many kilograms of water would you need to see one decay per month, assuming a lifetime of $10^{31}\,\text{y}$? (b) How many cubic meters of water is this? (c) If the actual lifetime is $10^{33}\,\text{y}$, how long would you have to wait on average to see a single proton decay?

### 11.6 The Big Bang

61. If the speed of a distant galaxy is 0.99c, what is the distance of the galaxy from an Earth-bound observer?

62. The distance of a galaxy from our solar system is 10 Mpc. (a) What is the recessional velocity of the galaxy? (b) By what fraction is the starlight from this galaxy redshifted (that is, what is its z value)?

63. If a galaxy is 153 Mpc away from us, how fast do we expect it to be moving and in what direction?

64. On average, how far away are galaxies that are moving away from us at 2.0% of the speed of light?

65. Our solar system orbits the center of the Milky Way Galaxy. Assuming a circular orbit 30,000 ly in radius and an orbital speed of 250 km/s, how many years does it take for one revolution? Note that this is approximate, assuming constant speed and circular orbit, but it is representative of the time for our system and local stars to make one revolution around the galaxy.

66.
(a) What is the approximate velocity relative to us of a galaxy near the edge of the known universe, some 10 Gly away? (b) What fraction of the speed of light is this? Note that we have observed galaxies moving away from us at greater than 0.9c.

67. (a) Calculate the approximate age of the universe from the average value of the Hubble constant, $H_0 = 20\,\text{km/s}$ per Mly. To do this, calculate the time it would take to travel 0.307 Mpc at a constant expansion rate of 20 km/s. (b) If somehow acceleration occurs, would the actual age of the universe be greater or less than that found here? Explain.

68. The Andromeda Galaxy is the closest large galaxy and is visible to the naked eye. Estimate its brightness relative to the Sun, assuming it has a luminosity $10^{12}$ times that of the Sun and lies 0.613 Mpc away.

69. Show that the velocity of a star orbiting its galaxy in a circular orbit is inversely proportional to the square root of its orbital radius, assuming the mass of the stars inside its orbit acts like a single mass at the center of the galaxy. You may use an equation from a previous chapter to support your conclusion, but you must justify its use and define all terms used.
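A few of the problems above reduce to one-line numerical estimates. The following back-of-the-envelope checks are our own illustrative sketch, not part of the OpenStax text; they assume the standard electron rest energy of 0.511 MeV and the tracking rule of thumb p[GeV/c] = 0.3 B[T] r[m]:

```python
import math

# Problem 31: electron-positron annihilation at rest releases the
# total rest energy, E = 2 * m_e c^2.
m_e_c2_MeV = 0.511
E_annihilation = 2 * m_e_c2_MeV          # 1.022 MeV

# Problem 46: momentum of a charged track in a magnetic field,
# p[GeV/c] = 0.3 * B[T] * r[m].
p_GeV = 0.3 * 2.0 * 0.75                 # 0.45 GeV/c

# Problem 50(a): time for one lap of a 2.00-km-diameter ring at ~c.
c = 3.0e8                                # m/s
circumference = math.pi * 2.00e3         # m
t_lap = circumference / c                # ~2.1e-5 s per lap

print(E_annihilation, p_GeV, t_lap)
```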
2022-06-26 01:37:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7377465963363647, "perplexity": 490.98893929823953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00064.warc.gz"}
http://math.stackexchange.com/questions/283683/direct-method-coercive-and-wlsc-proof
# Direct method: coercive and wlsc - proof

Consider $F\colon M\subseteq X\to [-\infty,\infty]$, $M\neq\emptyset$. Then $\min\limits_{u\in M}F(u)=\alpha$ has a solution if

1.) $X$ is reflexive.
2.) $F$ is coercive.
3.) $F$ is weakly lower semicontinuous (wlsc).

Let $f$ be defined by $f(x,u,\xi)=u^2+(\xi^2-1)^2$. The Bolza problem is defined by $$\inf\left\{F(u)=\int\limits_0^1 ((1-u'^2)^2+u^2)\, dx;\ u\in W^{1,4}(0,1);\ u(0)=0=u(1)\right\}.$$

a) Check if $X$ is reflexive.
b) Check if $F$ is coercive.
c) Check if $F$ is weakly lower semicontinuous.

Tip: You should use Young's inequality $2ab\leq \epsilon a^2+\frac{b^2}{\epsilon}~\forall~a,b,\epsilon >0$.

Hello, I have some problems with these tasks.

a) I guess the answer here is YES, it is reflexive, because $W^{1,4}(0,1)$ is a Hilbert space.

b) and c) are difficult for me. b) The professor gave the advice to find a sequence $(u_n)$ with $F(u_n)\to 0$. So I guess I have to find such a sequence with $\lVert u_n\rVert_{W^{1,4}}\to\infty$ but $F(u_n)\to 0$, which would show that $F$ is NOT coercive. But how do I find such a sequence, can you help me?

By the way: How is $\lVert \cdot \rVert_{W^{1,4}}$ defined? Is it $\lVert\cdot\rVert_{W^{1,4}}=\lVert\cdot\rVert_{L^2}+\lVert\cdot\rVert_{L^4}$?

-

a) $W^{1,4}_0(0,1)$ is not a Hilbert space, but it is reflexive. Probably the easiest way to see this is to use the map $F:W^{1,4}_0(0,1)\to L^4(0,1)$ defined by $F(u)=u'$. This map is an isomorphism onto its image (a closed subspace of $L^4$). Since the space $L^4$ is reflexive, so are its closed subspaces.

b) The advice you quoted pertains to c), not to b). The functional $F$ is coercive. You can prove coercivity using an algebraic estimate of the form $u^2+(\xi^2-1)^2\ge \frac12 \xi^4-A$, where $A$ is some constant.

c) Was already answered: the sequence $u_n(x)=\int_0^x \operatorname{sign}\sin 2\pi n t\,dt$ converges to $0$ weakly, but $F(u_n)\to 0$ while $F(0)=1$. Hence, $F$ is not weakly lower semicontinuous.

a) Why is the image closed?
c) Why does $u_n$ converge WEAKLY to 0? –  math12 Jan 21 '13 at 21:23 @math12 a) because $\|F(u)\|_{L^4}\ge c \|u\|_{W^{1,4}}$ by the Poincaré inequality. c) clearly $u_n$ does not converge to zero strongly since $\int |u'_n|^4=1$ for all $n$. Weak convergence can be verified directly, using integration by parts and Riemann-Lebesgue lemma. An easier way is to argue that since $u_n$ is bounded, it has a weakly convergent subsequence, and this subsequence satisfies the requirements of c). –  user53153 Jan 21 '13 at 21:37 I do not see how to show that $F$ is coercive. Suppose that $\lVert u_n\rVert_{W^{1,4}}=\lVert u_n\rVert_{L^4}+\lVert u_n'\rVert_{L^4}\to\infty$. Why then $F(u_n)=\int\limits_0^1 1\, dx-2\int\limits_0^1 u_n'(x)^2\, dx+\int\limits_0^1 u_n'(x)^4\, dx+\int\limits_0^1 u_n(x)^2\, dx\to\infty$? –  math12 Jan 22 '13 at 11:11
2014-08-30 22:16:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.978490948677063, "perplexity": 143.95776511693796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835822.36/warc/CC-MAIN-20140820021355-00016-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.ias.ac.in/listing/bibliography/pram/S_RADHA
Articles written in Pramana – Journal of Physics

• Metamagnetism in Ce(Ga,Al)2

The effect of Al substitution on the magnetic properties of the Ce(Ga1−xAlx)2 (x = 0, 0.1 and 0.5) system has been studied. The magnetic state of CeGa2 is found to be FM with a TC of 8 K, whereas the compounds with x = 0.1 and 0.5 are AFM and possess a TN of about 9 K. These two compounds undergo a metamagnetic transition, and the critical fields are about 1.2 T and 0.5 T, respectively, at 2 K. These variations are explained on the basis of a helical spin structure in these compounds.

• Dynamic frequency analysis of stress–strain-dependent reversibly deformable broadband RF antenna over unevenly made elastomeric substrate

This paper presents the design and development of a stretchable and deformable broadband antenna using a low-cost silicone rubber-based dielectric substrate. A slot is introduced in the substrate to improve the stretchable behaviour. The dielectric properties of the substrate are measured using the suspended ring resonator method. The proposed antenna uses silver elastomeric Lycra fabric as the conductive medium. The resistivity of the conducting Ag fabric is 1 $\Omega$ per sq. mm. The conducting patch and ground plane are attached to the substrate using a silicone-based adhesive. The fabricated antenna is tested for its resonant characteristics using a vector network analyser.

• Terahertz broadband metamaterial absorber enabled by SiO$_{2}$, polyimide and PET dielectric substrates

A broadband polarisation-insensitive terahertz (THz) metamaterial absorber (MMA) is presented in this paper. The MMA consists of a simple planar structure as a unit cell and an optically transparent indium tin oxide (ITO) ground plane, both separated by a 50 $\mu$m dielectric substrate. We designed three combinations of MMA here, which are ITO–polyimide–ITO, ITO–polyethylene terephthalate (PET)–ITO and ITO–silicon dioxide (SiO$_{2}$)–ITO, for the same planar structure.
By changing the substrate of the structure, the resonant frequency and bandwidth of the absorber structure can be varied. The numerical simulation of the absorber shows that the absorptivity is $>$ 96% for all three substrates. Polyimide, PET and SiO$_{2}$ based absorbers demonstrated bandwidths of 0.558 THz, 0.603 THz and 0.658 THz, with covered broadband frequency ranges of 0.4254–0.9829 THz, 0.457–1.16 THz and 0.511–1.169 THz respectively. The ITO–PET–ITO absorber structure produced optical transparency. These bandwidths are compatible and convenient for electronic sources in the terahertz region. This study also provides applications in THz sensing and imaging, communication and detection systems.

• Terahertz single dual multi-band metamaterial absorber

In this study, a new multifunctional terahertz (THz) metamaterial absorber (MMA) has been presented which is controlled by the thickness of the substrate used. The proposed structure consists of copper as the ground plane, and a polyimide dielectric layer is placed in between the ground plane and the top radiating patch. The resonant frequency and number of resonating modes of the proposed absorber can be changed by varying the thickness of the substrate from 10 to 100 μm for the same planar structure. Depending on the thickness of the substrate, this MMA gives a narrow (10 μm), double (20 μm), triple (30 μm), quad (50 μm) and hexa (100 μm) number of resonating modes. In order to analyse the physical mechanism of the proposed absorber, we took the 10, 20 and 30 μm-based MMAs and demonstrated their electric and magnetic field distributions. We compared the resonant frequency ranges and the number of bands with the previously reported papers. The polarisation and angle insensitivity of the design have been validated by numerical simulation up to 90° of oblique incidence. The effects of variation in geometrical parameters and the sensing behaviour have been studied in the narrow-band (10 μm) MMA structure.
The designed multifunctional absorber has the advantage of using the same MMA to produce multiple (narrow, double, triple, quad and hexa) band absorbers.

• A miniaturised FSS with band-stop response for shielding application in X-band frequency

A novel miniaturised single-layer square loop frequency-selective surface (FSS), designed and optimised for effective shielding in X-band, is presented. The proposed FSS structure provides a wide bandwidth response of 4.0 GHz and covers the X-band frequency range (8–12 GHz) with a centre frequency of 10 GHz and an attenuation of 39 dB. The simulation results of the proposed design were compared with equivalent circuit method results: both achieved the same frequency response and bandwidth, but the attenuation level varied. Due to the symmetric structure, the proposed design achieves polarisation-independent characteristics and provides a stable response at normal and oblique incidences for both TE and TM modes up to 60°. Both the measured and simulated results are in good agreement.

• Polarisation-insensitive and broadband band-stop metamaterial filter for THz waves

This paper describes the design and analysis of a tri-band metamaterial band-stop filter for terahertz applications. The design consists of a structured gold metallic patch over a flexible polyimide substrate. The thicknesses of the substrate and the metallic patch are 125 and 1 μm, respectively. The simulation results reveal that the designed structure resonates at three frequencies: f$_1$ = 0.6 THz, f$_2$ = 1.4 THz and f$_3$ = 2.4 THz. The proposed structure has polarisation-insensitive and angle-resolved transmission characteristics. The structure has 200 GHz and 800 GHz bandwidths at f$_1$ and f$_2$. This proves that the proposed design will be useful for broadband terahertz applications.
The multiband resonances have been confirmed and analysed using the field and surface current distributions. These multiband resonances were due to the combination of the electric dipolar, quadrupolar and magnetic dipolar resonance behaviour of the patterned metallic structure.
2023-02-08 19:46:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4628230929374695, "perplexity": 3679.3651612354906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00747.warc.gz"}
https://ncertmcq.com/cbse-sample-papers-for-class-9-maths-paper-1/
CBSE Sample Papers for Class 9 Maths Paper 1 is part of CBSE Sample Papers for Class 9 Maths. Here we have given CBSE Sample Papers for Class 9 Maths Paper 1.

## CBSE Sample Papers for Class 9 Maths Paper 1

Board: CBSE
Class: IX
Subject: Maths
Sample Paper Set: Paper 1
Category: CBSE Sample Papers

Students who are going to appear for CBSE Class 9 Examinations are advised to practice the CBSE sample papers given here, which are designed as per the latest syllabus and marking scheme prescribed by the CBSE. Paper 1 of Solved CBSE Sample Papers for Class 9 Maths is given below with free PDF download solutions.

Time: 3 Hours
Maximum Marks: 80

General Instructions:

• All questions are compulsory.
• Questions 1-6 in Section-A are Very Short Answer Type Questions carrying 1 mark each.
• Questions 7-12 in Section-B are Short Answer (SA-I) Type Questions carrying 2 marks each.
• Questions 13-22 in Section-C are Short Answer (SA-II) Type Questions carrying 3 marks each.
• Questions 23-30 in Section-D are Long Answer Type Questions carrying 4 marks each.

SECTION-A

Question 1.

Question 2. What is the degree of the zero polynomial?

Question 3. In ∆ABC, BC is produced to D. If ∠ABC = 40° and ∠ACD = 120°, then ∠A = ?

Question 4. Write the signs of the abscissa and ordinate of a point in quadrant II.

Question 5. The total surface area of a cube is 216 cm². Find its volume.

Question 6. Find the slope of the line 2x + 3y – 4 = 0.

SECTION-B

Question 7. If a + b + c = 0, then a³ + b³ + c³ = ?

Question 8. Find four different solutions of 2x + y = 6.

Question 9. The base of an isosceles triangle is 16 cm and its area is 48 cm². Find the perimeter of the triangle.

Question 10. If the mean of five observations x, x + 2, x + 4, x + 6 and x + 8 is 13, find the value of x.

Question 11. 1500 families with 2 children each were selected randomly and the following data were recorded. Out of these families, one family is selected at random.
What is the probability that the selected family has (i) 2 girls (ii) no girl?

Question 12. Two coins are tossed 1000 times and the outcomes are recorded as under. If the two coins are tossed at random, what is the probability of getting (i) at most one head (ii) at least one head?

SECTION-C

Question 13. If x = 3 + √8, find the value of $$\left( { x }^{ 2 }+\frac { 1 }{ { x }^{ 2 } } \right)$$

Question 14. If p = 2 – a, prove that a³ + 6ap + p³ – 8 = 0.

Question 15. In the given figure, prove that x = α + β + γ.

Question 16. If the bisector of the vertical angle of a triangle bisects the base, prove that the triangle is isosceles.

Question 17. The sides BA and DC of quad. ABCD are produced as shown in the given figure. Prove that x° + y° = a° + b°.

Question 18. In an equilateral triangle, prove that the centroid and the circumcentre coincide.

Question 19. Construct a ∆ABC whose perimeter is 14 cm and whose sides are in the ratio 2 : 3 : 4.

Question 20. Plot the points A(-5, 2), B(-4, -3), C(3, -2) and D(6, 0) on a graph paper. Write the name of the figure formed by joining them.

Question 21. The diameter of a roller, 120 cm long, is 84 cm. If it takes 500 complete revolutions to level a playground, find the cost of levelling it at 75 paise per square metre.

Question 22. The internal and external diameters of a hollow hemispherical vessel are 20 cm and 28 cm respectively. Find the cost of painting the vessel all over at 14 paise per cm².

SECTION-D

Question 23. Show that

Question 24. Prove that x³ + y³ + z³ – 3xyz = (x + y + z)(x² + y² + z² – xy – yz – zx). Hence find the value of x³ + y³ + z³ if x + y + z = 0.

Question 25. In a ∆ABC, the sides AB and AC are produced to P and Q respectively. The bisectors of ∠PBC and ∠QCB intersect at a point O. Prove that ∠BOC = 90° – $$\frac { 1 }{ 2 }$$ ∠A.

Question 26. In each of the following figures, AB || CD. Find the value of x in each case.

Question 27. Draw the graph of the equation 3x + 2y = 12. At what points does the graph cut the x-axis and the y-axis?

Question 28.
A river 2 m deep and 45 m wide is flowing at the rate of 3 km per hour. Find the volume of water that runs into the sea per minute.

1. Why should we preserve and not waste water and other natural resources? Write one method to restore water at your home.
2. Why is it necessary to clean all the rivers in the country?

Question 29. The mean of 25 observations is 36. If the mean of the first 13 observations is 32 and that of the last 13 observations is 39, find the 13th observation.

Question 30. Plot the points A(2, 5), B(-2, 2) and C(4, 2) on a graph paper. Join AB, BC and AC. Calculate the area of ∆ABC.

Solutions

Solution 1.

Solution 2. The degree of the zero polynomial is not defined.

Solution 3. ∠A + ∠B = ∠ACD (exterior angle of a triangle) ⇒ ∠A + 40° = 120° ⇒ ∠A = 80°

Solution 4. The points in quadrant II are of the form (-, +), i.e. the abscissa is negative and the ordinate is positive.

Solution 5. Total surface area of the cube (T.S.A.) = 6a² = 216 ⇒ a² = 36 ⇒ a = √36 = 6 cm
Volume of the cube = a³ = 6³ = 216 cm³

Solution 6. 2x + 3y – 4 = 0 ⇒ 3y = –2x + 4 ⇒ y = (–2/3)x + 4/3, so the slope is –2/3.

Solution 7. a + b + c = 0 ⇒ a + b = –c ⇒ (a + b)³ = (–c)³ ⇒ a³ + b³ + 3ab(a + b) = –c³ ⇒ a³ + b³ + 3ab(–c) = –c³ ⇒ a³ + b³ + c³ = 3abc

Solution 8. y = 6 – 2x
(i) If x = 1, y = 6 – 2 = 4
(ii) If x = 2, y = 6 – 4 = 2
(iii) If x = 3, y = 6 – 6 = 0
(iv) If x = 4, y = 6 – 8 = –2

Solution 9. b = 16 cm, area = 48 cm²
Height h = (2 × 48)/16 = 6 cm, so each equal side a = √(8² + 6²) = 10 cm.
Perimeter of the isosceles triangle = 2a + b = 20 + 16 = 36 cm

Solution 10. Mean of the given observations = [x + (x + 2) + (x + 4) + (x + 6) + (x + 8)]/5 = x + 4 = 13 ⇒ x = 9

Solution 11. Total number of families = 1500

Solution 12. Total number of tosses = 266 + 540 + 194 = 1000
Outcomes with at most one head = 540 + 194 = 734, so P(E₁) = P(at most one head) = 734/1000 = 0.734
Outcomes with at least one head = 540 + 266 = 806, so P(E₂) = P(at least one head) = 806/1000 = 0.806

Solution 13. x = 3 + √8 ⇒ 1/x = 1/(3 + √8) = 3 – √8 (rationalising the denominator), so x + 1/x = 6 and x² + 1/x² = (x + 1/x)² – 2 = 36 – 2 = 34.

Solution 14. We have p = 2 – a ⇒ a + p + (–2) = 0. Putting a = x, p = y and (–2) = z, we get x + y + z = 0
⇒ a³ + p³ + (–2)³ = 3 × a × p × (–2) [if x + y + z = 0, then x³ + y³ + z³ = 3xyz]
⇒ a³ + p³ + (–2)³ = –6ap
⇒ a³ + 6ap + p³ – 8 = 0

Solution 15.
Join B and D and produce BD to E. Then p + q = β and s + t = x.
Side BD of ∆ABD is produced to E ∴ p + α = s …(1)
Side BD of ∆CBD is produced to E ∴ q + γ = t …(2)
Adding (1) and (2): (p + α) + (q + γ) = s + t
(p + q) + α + γ = x
β + α + γ = x ⇒ x = α + β + γ

Solution 16. Given: ∆ABC in which AD is the bisector of ∠A, which meets BC in D such that BD = DC.
To prove: AB = AC
Construction: Produce AD to E such that AD = DE and join EC.
Proof: In ∆ABD and ∆ECD:
BD = DC (given), AD = DE (by construction), ∠ADB = ∠EDC (vertically opposite angles)
∴ ∆ABD ≅ ∆ECD (SAS)
⇒ AB = EC …(1) and ∠1 = ∠3 …(2) (CPCT)
Also ∠1 = ∠2 …(3) [∵ AD bisects ∠A]
∴ ∠2 = ∠3 [using (2) and (3)]
⇒ EC = AC …(4) [sides opposite to equal angles]
⇒ AB = AC [using (1) and (4)]
⇒ ∆ABC is isosceles.

Solution 17. We have ∠A + b° = 180° (linear pair) ⇒ ∠A = 180° – b° …(1)
Also ∠C + a° = 180° (linear pair) ⇒ ∠C = 180° – a° …(2)

Solution 18. Let ∆ABC be the given equilateral triangle and let its medians AD, BE and CF intersect at G. Then G is the centroid of ∆ABC.
In ∆BCE and ∆CBF: BC = CB (common), ∠B = ∠C [each 60°]
This shows that G is the circumcentre of ∆ABC
⇒ G is the centroid as well as the circumcentre of ∆ABC.

Note:
Centroid: The point of intersection of the medians of a triangle is called the centroid. The centroid of the triangle is the point located at $$\frac { 2 }{ 3 }$$ of the distance from a vertex along the median.
Circumcentre: The centre of the circumcircle of the triangle is called the circumcentre.

Solution 19. Steps of construction:
(i) Draw a line segment PQ = 14 cm. Draw a ray PX making an acute angle with PQ, drawn in the downward direction.
(ii) From P, mark a set of (2 + 3 + 4) = 9 equal distances along PX.
(iii) Mark points L, M, N on PX such that PL = 2 cm, LM = 3 cm and MN = 4 cm.
(iv) Join NQ. Through L and M draw LB || NQ and MC || NQ, cutting PQ at B and C respectively.
(v) With B as centre and radius BP draw an arc. With C as centre and radius CQ draw another arc, cutting the previous arc at A.
(vi) Join AB and AC. ∆ABC is the required triangle.

Solution 20.
Points A(–5, 2), B(–4, –3), C(3, –2) and D(6, 0) are shown on graph paper. The figure formed by joining them is a quadrilateral.

Solution 21. Radius of roller r = 42 cm, length h = 120 cm.
Area covered by the roller in 1 revolution = C.S.A. of the roller = 2πrh = 2 × (22/7) × 42 × 120 = 31680 cm²

Solution 22. Outer radius of the vessel R = 14 cm, inner radius r = 10 cm.
Area of the outer surface = 2πR² = (2π × 14 × 14) cm² = 392π cm²
Area of the inner surface = 2πr² = (2π × 10 × 10) cm² = 200π cm²
Area of the ring (shaded) at the top = π(R² – r²) = π[(14)² – (10)²] = π(14 + 10)(14 – 10) = 96π cm²
Total area to be painted = (392π + 200π + 96π) cm² = 688π cm²

Solution 23.

Solution 24. x³ + y³ + z³ – 3xyz = [(x + y)³ – 3xy(x + y)] + z³ – 3xyz [let x + y = u]
= u³ – 3xyu + z³ – 3xyz
= (u³ + z³) – 3xy(u + z)
= (u + z)(u² – uz + z²) – 3xy(u + z)
= (u + z)[u² + z² – uz – 3xy]
= (x + y + z)[(x + y)² + z² – (x + y)z – 3xy] [u = x + y]
= (x + y + z)[x² + y² + z² + 2xy – xz – yz – 3xy]
= (x + y + z)(x² + y² + z² – xy – yz – zx)
If x + y + z = 0, then
x³ + y³ + z³ – 3xyz = 0 × (x² + y² + z² – xy – yz – zx) = 0
⇒ x³ + y³ + z³ = 3xyz

Solution 25. We have ∠B + ∠CBP = 180° (linear pair)

Solution 26. (i) Draw EF || AB || CD ⇒ x° = ∠1 + ∠2
Now AB || EF and AE is the transversal.
∠1 + ∠BAE = ∠1 + 104° = 180° (co-interior angles) ⇒ ∠1 = 180° – 104° = 76°
Again EF || CD and EC is the transversal.
∠2 + ∠ECD = ∠2 + 116° = 180° ⇒ ∠2 = 180° – 116° = 64°
Hence x = ∠1 + ∠2 = 76° + 64° = 140°
(ii) Draw EO || AB || CD ⇒ x° = ∠1 + ∠2
Now EO || AB and OB is the transversal.
∠1 + ∠ABO = 180° [co-interior angles] ⇒ ∠1 = 180° – 40° = 140°
Again EO || CD and DO is the transversal.
∠2 + ∠CDO = 180° [co-interior angles] ⇒ ∠2 = 180° – 35° = 145°
Hence x = ∠1 + ∠2 = 140° + 145° = 285°

Solution 27.
Linear equation 3x + 2y = 12. The line cuts the x-axis at the point A(4, 0) and the y-axis at the point B(0, 6).

Solution 28. Volume of the water running into the sea per minute = volume of a cuboid = l × b × h = 50 × 45 × 2 = 4500 m³
(i) We should preserve water and other natural resources for ourselves and for the next generation, and make sure they are not polluted or used up too quickly.
(ii) One method of conserving fresh water at home is rainwater harvesting.
(iii) Our rivers are heavily polluted, which disturbs the ecosystem and is creating a large-scale shortage of fresh water. So it is necessary to clean all the rivers of our country.

Solution 29. Mean of the first 13 observations = 32 ⇒ sum of the first 13 observations = 32 × 13 = 416
Mean of the last 13 observations = 39 ⇒ sum of the last 13 observations = 39 × 13 = 507
Mean of all 25 observations = 36 ⇒ sum of all 25 observations = 36 × 25 = 900
∴ the 13th observation = (416 + 507) – 900 = 23
Hence the 13th observation = 23.

Solution 30. From the graph: Base = BC = CE + BE = (4 – 0) + (0 – (–2)) = 4 + 2 = 6 units
Height = AD = OF – OE = 5 – 2 = 3 units
ar(∆ ABC) = $$\frac { 1 }{ 2 }$$ × base × height = $$\frac { 1 }{ 2 }$$ × 6 × 3 = 9 sq. units
# The probability of throwing a number greater than 2 with a fair dice is

Question: The probability of throwing a number greater than 2 with a fair dice is

(a) $\frac{3}{5}$

(b) $\frac{2}{5}$

(c) $\frac{2}{3}$

(d) $\frac{1}{3}$

Solution:

GIVEN: A dice is thrown once.

TO FIND: The probability of getting a number greater than 2.

The total number of outcomes on a dice is 6. The numbers greater than 2 are 3, 4, 5 and 6, so there are 4 favourable outcomes.

We know that PROBABILITY = (number of favourable outcomes)/(total number of outcomes)

Hence the probability of getting a number greater than 2 is $\frac{4}{6}=\frac{2}{3}$

Hence the correct option is (c).
SECTION I ## INTRODUCTION Photonic signal processing is attractive due to its high time-bandwidth capability, immunity to electromagnetic interference, and its potential to overcome the limitations of electronic approaches. It can also process signals directly inside the fiber [1]. Photonic signal processors implemented using infinite impulse response (IIR) optical delay line structures, such as the amplified recirculating delay line (ARDL) loop [2] and the active fibre-Bragg-grating-pair cavity [3], can generate a large number of delayed optical signals to realize a high-resolution bandpass filter response using only a few optical components. However, these filters have a limited operating bandwidth because they have a periodic frequency response, where the filter passband repeats at integer multiples of the fundamental filter passband frequency. The separation between two successive passbands of the filter is referred to as the free spectral range (FSR) [4]. A large FSR is required for a filter to operate over a wide bandwidth. The filter FSR is determined by the time delay of the delayed optical signals, which in turn depends on the loop length of the IIR optical delay line structure. A large-FSR bandpass filter therefore requires the IIR optical delay line structure to have a short loop length. The loop length of an IIR optical delay line structure is restricted by the components inside the delay line loop. An optical amplifier is required inside the delay line loop to compensate for the loop loss and to provide gain for generating a large number of delayed optical signals, so that a high-resolution bandpass filter response can be obtained. A passive recirculating delay line with a short loop length of 2.8 cm has been fabricated on a lithium niobate waveguide [5]. With the inclusion of a few-centimeter-long erbium-doped waveguide amplifier [6], the length of the integrated ARDL loop can be 10 cm.
This shows that an IIR-based optical delay line signal processor can only realize a bandpass filter response with a maximum FSR of around 1.4 GHz, even if the device is integrated in a lithium niobate waveguide with a refractive index of 2.14. This limits the filter operating bandwidth to less than 2.8 GHz, which is too low for many applications. Recently, the Vernier effect [7] has been used in microwave photonic signal processing to increase the filter frequency response FSR [4], [8], [9], [10]. These filters are based on connecting two optical delay line structures in series with a proper design of the delay time of each structure. While these structures have demonstrated an increase in the frequency response FSR, they either have limited resolution or generate an excessive amount of phase-induced intensity noise (PIIN), which limits the signal-to-noise ratio (SNR) [11] and makes them unusable in practice. Recently, we reported a technique, based on the use of a time compression unit, to increase the FSR of a microwave photonic bandpass filter [12]. This filter has no PIIN, but only a low-resolution filter response was demonstrated. The aim of this paper is to present, for the first time, a theoretical and experimental investigation of the noise present in a new large-FSR high-resolution microwave photonic bandpass filter implemented using the Vernier effect. The filter features the advantages of being PIIN-free and having robust performance. Experimental results are presented that demonstrate a high-resolution bandpass filter with a 31-fold increase in the frequency response FSR. The filter SNR performance is also measured and compared with that of a simple optical delay line structure. SECTION II ## FILTER TOPOLOGY Fig. 1 shows the topology of the new large-FSR high-resolution microwave photonic bandpass filter. The continuous wave (CW) light from a laser source is split by an optical coupler.
The light at the first coupler output is intensity modulated by an optical modulator driven by an RF signal. The RF modulated optical signal at the modulator output circulates in the first frequency shifting amplified recirculating delay line (FS-ARDL) loop with a loop length $L_{1}$. This generates many different frequency delayed optical signals, which produce a coherence-free high-resolution bandpass filter response [13]. The delayed optical signals, together with the CW light from one of the optical coupler outputs, are launched into a wavelength converter formed by an optical circulator and a semiconductor optical amplifier (SOA). The function of the wavelength converter is to copy the processed RF information signal carried by the different frequency delayed optical signals into the CW light through the process of cross gain modulation (XGM) in the SOA [14]. The single frequency optical signal at the output of the wavelength converter has a coherence-free high-resolution bandpass filter response. It passes through an optical attenuator (Att) before being launched into the second FS-ARDL loop with a loop length $L_{2}$. The optical attenuator is used to control the signal power into the second loop to avoid saturating the optical amplifier inside the second loop. The RF information signal at the second FS-ARDL output, which has now been processed twice, is again copied to the CW light from one of the optical coupler outputs via the XGM effect in the SOA. The single frequency optical signal carrying the processed RF information signal is launched into the third FS-ARDL, with a loop length $L_{3}$, for signal processing a third time, and is then detected by the photodetector. Fig. 1. Topology of the large-FSR high-resolution microwave photonic bandpass filter. According to the Vernier effect, the transfer function of the structure shown in Fig. 1 is the product of the transfer functions of the individual FS-ARDL loops.
The FSR of the overall structure is the least common multiple of the FSRs of the individual FS-ARDL loops [7], which is given by TeX Source $$FSR = k_{1} FSR_{1} = k_{2} FSR_{2} = k_{3}FSR_{3}\eqno{\hbox{(1)}}$$ where $k_{m}$ is an integer and $FSR_{m}$ is the FSR of the frequency response produced by the $m$th FS-ARDL loop, which is given by TeX Source $$FSR_{m} = {c \over nL_{m}}\eqno{\hbox{(2)}}$$ where $c$ is the speed of light, $n$ is the fiber refractive index and $L_{m}$ is the loop length of the $m$th FS-ARDL. Therefore, in order to obtain a large increase in the FSR, the loop lengths of the FS-ARDLs shown in Fig. 1 need to be designed so that none is an integer multiple of another. The frequency shift $\Delta f$ in each loop shown in Fig. 1 can be the same as or different from that in the other loops. It only needs to be at least three times larger than the maximum RF signal frequency in order to avoid the aliasing problem [1]. Since the delayed optical signals into the wavelength converters and into the photodetector have different optical frequencies (or wavelengths), the structure shown in Fig. 1 has no coherent interference or PIIN problems. Hence, a robust bandpass filter response can be obtained. It should be pointed out that a microwave photonic bandpass filter implemented using multiple ARDL loops has been reported [15]. However, the previously reported structure is not aimed at increasing the frequency response FSR or at obtaining a low-noise performance. It aims to improve the filter response skirt selectivity by designing all the ARDL loops to have the same length, so that the filter passbands generated by the loops are aligned with each other. This is different to the large-FSR microwave photonic bandpass filter presented in this paper, where the loops are designed to have different lengths and contain an optical frequency shifter for PIIN suppression.
Moreover, until now there have been no reports on the investigation of the noise components generated by a multiple coherence-free optical delay line structure. SECTION III ## FILTER TRANSFER FUNCTION AND NOISE ANALYSIS The large-FSR high-resolution microwave photonic bandpass filter transfer function, which is defined as the ratio of the output and input RF signal voltage, is given by TeX Source $$H(f) = \prod_{m = 1}^{M} \left[(1 - \kappa_{m}) + \kappa_{m}^{2} \sum_{i = 1}^{N_{m}} g_{m}^{i} l_{m}^{i} (1 - \kappa_{m})^{i - 1} z_{m}^{- i}\right] \cdot \prod_{m = 1}^{M - 1} \left[\eta_{m} (f) G_{m} Att_{m}\right]\eqno{\hbox{(3)}}$$ where $M\ >\ 1$ is the number of the FS-ARDL loops, $N_{m}$ is the number of taps generated in the $m$th loop, $\kappa_{m}$ is the coupling ratio of the optical coupler in the $m$th loop, $g_{m}$ is the gain of the optical amplifier inside the $m$th loop, $l_{m}$ is the insertion loss of the optical frequency shifter inside the $m$th loop, $z_{m} = \exp(j2 \pi T_{m}f)$, $f$ is the RF frequency, $T_{m} = (nL_{m})/c$ is the delay time corresponding to the $m$th FS-ARDL loop length $L_{m}$, $\eta_{m}(f)$ is the XGM wavelength conversion efficiency of the $m$th wavelength converter, $G_{m}$ is the ratio of the optical power at the output and input of the $m$th wavelength converter, and $Att_{m}$ is the attenuation of the $m$th optical attenuator. The first term in (3) is the product of the $M$ FS-ARDL loop transfer functions. The second term is due to the XGM effect in the SOAs. Since SOAs can be designed to have a wide XGM bandwidth [14], the XGM wavelength conversion efficiency response is flat for the frequencies below 40 GHz. Therefore, the overall filter response shape is simply determined by the frequency responses of the $M$ FS-ARDL loops, which can be controlled by the system parameters such as the optical coupler coupling ratios, the optical amplifier gains and the loop lengths. 
As an example, we consider the design of a high-resolution microwave photonic bandpass filter that has a fundamental passband frequency at 4.275 GHz, a 500 kHz 3-dB passband width and an over 30 dB stopband rejection level. This cannot be realized using the single-loop structure because the loop length of the FS-ARDL would need to be 3.3 cm, which is too short even if the device is integrated in a lithium niobate waveguide. However, the dual-loop structure can be used to satisfy the design requirements. The first FS-ARDL loop length is designed to be 62.3 cm, which corresponds to an FSR of 225 MHz, one-nineteenth of the designed filter FSR. The second FS-ARDL loop length is designed to be 55.7 cm, which corresponds to an FSR of 251.47 MHz, one-seventeenth of the designed filter FSR. The two FS-ARDL loops have the same loop gain of $\kappa gl = 0.99$. The frequency responses of the two FS-ARDL loops and the combined response are shown in Fig. 2. This shows that a 17-fold increase in the filter frequency response FSR can be achieved by using the dual-loop structure. The unwanted passbands are more than 30 dB below the wanted passband at 4.275 GHz. The filter 3-dB passband width is 500 kHz. Note that both the stopband rejection level and the 3-dB passband width of the dual-loop large-FSR microwave photonic bandpass filter depend on the loop gains of the FS-ARDLs. A higher stopband rejection level can be achieved by increasing the loop gain, but this will alter the 3-dB passband width of the filter. Connecting another FS-ARDL loop after the dual-loop structure, to form the triple FS-ARDL loop structure shown in Fig. 1, provides an extra degree of freedom in controlling the stopband rejection level and the 3-dB passband width of the filter. Fig. 3 shows the frequency response of the triple-loop structure having loop lengths of 68.9 cm, 62.3 cm and 55.7 cm, and the same loop gain of 0.987 for the three loops.
In this case, the filter 3-dB passband width remains 500 kHz while the filter stopband rejection level is increased to over 55 dB. Fig. 2. (a) Frequency responses of the first (solid) and second (dash) FS-ARDL microwave photonic bandpass filters with the loop lengths of 62.3 cm and 55.7 cm, respectively. The filter 3-dB passband widths are around 1.1 MHz. (b) Frequency response of the dual FS-ARDL large-FSR microwave photonic bandpass filter. The filter 3-dB passband width is 500 kHz. Fig. 3. Frequency response of the triple FS-ARDL large-FSR microwave photonic bandpass filter. The filter 3-dB passband width is 500 kHz. Fig. 4. (a) Simulated s-sp beat noise spectrums of the FS-ARDL microwave photonic bandpass filters with 225 MHz (solid) and 251.47 MHz (dash) FSR. (b) Simulated s-sp beat noise spectrum of the dual-loop large-FSR microwave photonic bandpass filter. Since the PIIN generated in the FS-ARDL is frequency shifted to be outside the RF information band, the signal-spontaneous (s-sp) beat noise generated by the optical amplifier inside the FS-ARDL loop is the dominant noise source in the system. It was found that the s-sp beat noise generated in the FS-ARDL is 40 dB below the PIIN generated in the conventional ARDL [13]. Therefore, the FS-ARDL structure has the ability to realize a bandpass filter response with a low-noise performance. In the case of the dual FS-ARDL loop structure, the CW light into the wavelength converter copies both the processed RF information signal and the s-sp beat noise generated in the first loop to a single frequency optical signal that launches into the second loop. The second loop filters the noise generated in the first loop as well as generates its own noise [16]. 
Therefore, the s-sp beat noise spectrum at the output of the dual FS-ARDL large-FSR microwave photonic bandpass filter is given by TeX Source $$S_{s - sp} (f) = S_{s - sp, 1} (f) \eta_{1} (f) G_{1} Att_{1} \left\vert H_{2} (f) \right\vert^{2} + S_{s - sp, 2}(f)\eqno{\hbox{(4)}}$$ where $S_{s - sp, m}(f)$ is the s-sp beat noise spectrum of the $m$th FS-ARDL [13], and $H_{m}(f)$ is the transfer function of the $m$th FS-ARDL and is given by TeX Source $$H_{m} (f) = (1 - \kappa_{m}) + \kappa_{m}^{2} \sum_{i = 1}^{N_{m}} g_{m}^{i}l_{m}^{i} (1 - \kappa_{m})^{i - 1}z_{m}^{-i}.\eqno{\hbox{(5)}}$$ In the case of the large-FSR microwave photonic bandpass filter having $M$ FS-ARDL loops, the power spectrum of the dominant s-sp beat noise is given by TeX Source $$S_{s - sp} (f) = \sum_{m = 1}^{M - 1} \left[S_{s - sp, m} (f) \prod_{n = m}^{M - 1} \eta_{n} (f) G_{n} Att_{n} \left\vert H_{n + 1} (f) \right\vert^{2}\right] + S_{s - sp, M} (f).\eqno{\hbox{(6)}}$$ Fig. 4 shows the s-sp beat noise spectrums of two single FS-ARDL loop microwave photonic bandpass filters with the loop lengths of 62.3 cm and 55.7 cm, respectively, and the s-sp beat noise spectrum of the large-FSR microwave photonic bandpass filter formed by two FS-ARDL loops. The frequency responses of these filters are shown in Fig. 2. The fundamental passband frequency of the dual FS-ARDL loop microwave photonic bandpass filter is 4.275 GHz. In order to compare the s-sp beat noise level generated in the single and dual FS-ARDL loop structures, the optical attenuator attenuation in the dual-loop structure is set to be TeX Source $$Att_{1} = {1 \over G_{1} \cdot \left\vert H_{1}(0)\right\vert^{2}}\eqno{\hbox{(7)}}$$ so that the average output optical power of the dual-loop structure is the same as that of the single-loop structure. It can be seen from Fig. 4(b) that the s-sp beat noise peaks generated in the first loop at 4.05 GHz and 4.5 GHz are filtered by the second loop. 
The s-sp beat noise at the passband of the dual-loop large-FSR microwave photonic bandpass filter is 3 dB higher than that of the single FS-ARDL microwave photonic bandpass filter. Since the average output optical power into the photodetector is the same for both cases, the SNR of the dual-loop large-FSR microwave photonic bandpass filter is only 3 dB lower than that of the single FS-ARDL microwave photonic bandpass filter. Hence, the advantage of low-noise performance in the FS-ARDL remains in the dual-loop structure. Most importantly, the FSR of the frequency response generated by the dual-loop structure is significantly increased compared to the single-loop structure. SECTION IV ## EXPERIMENTAL RESULTS An experiment was set up as shown in Fig. 5 to verify the concept of the large-FSR high-resolution microwave photonic bandpass filter. The optical source was a tunable external cavity laser. The laser wavelength was 1550 nm and the laser linewidth was less than 500 kHz. The CW light from the laser source was split by a 50 : 50 optical coupler. One of the optical coupler outputs was connected to a polarization controller followed by a quadrature biased electro-optic intensity modulator. The RF modulated optical signal at the modulator output was launched into an FS-ARDL, which was formed by a 50 : 50 optical coupler, an erbium-doped fibre amplifier (EDFA), an optical filter and a 250 MHz acousto-optic frequency shifter (AOFS). The delayed optical signals at the output of the FS-ARDL and the CW light from the laser source were fed into a wavelength converter, which comprised an optical circulator and a SOA. The variable optical attenuator (VOA) in front of the SOA was used to control the CW light power into the SOA to obtain a high XGM wavelength conversion efficiency. The output of the wavelength converter passed through a VOA before entering the second FS-ARDL. 
The second FS-ARDL was formed from the same components as the first FS-ARDL, except that a 750 MHz AOFS was used instead of the 250 MHz AOFS. The 750 MHz AOFS had a 4 dB polarization dependent loss. Hence, polarization controllers at the input of the second loop and inside the loop were used to ensure the recirculating optical signals were aligned to the frequency shifter input polarization. A variable optical delay line was inserted into the second FS-ARDL to adjust the loop length to increase the overall frequency response FSR. The delayed optical signals at the output of the second loop were detected by a photoreceiver, whose output was connected to a network analyzer to display the filter transfer characteristic. The loop lengths of the FS-ARDLs were obtained from the filter frequency response FSRs measured on the network analyzer with a 10 kHz resolution. By using (2) together with the measured FSRs, the loop lengths of the first and second FS-ARDLs were found to be 32.26 m and 24.78 m, respectively. Note that shorter loop lengths, which enable the filters to have larger FSRs, can be realized by using linear SOAs instead of EDFAs inside the loops. However, due to the lack of two linear SOAs, the large-FSR high-resolution microwave photonic bandpass filter was demonstrated using EDFAs. Fig. 5. Experimental setup of the large-FSR high-resolution microwave photonic bandpass filter. Fig. 6 shows the measured and simulated frequency responses of the dual-loop large-FSR microwave photonic bandpass filter. Excellent agreement can be seen. The filter has a sharp 3-dB passband width of 15 kHz and an over 30 dB stopband rejection level. The filter frequency response FSR was 80.7 MHz. A larger FSR can be obtained by adjusting the length of the second loop. However, due to the aliasing problem, the maximum operating frequency of the FS-ARDL based microwave photonic bandpass filter was limited to one-third of the frequency shift, which was 83.3 MHz when using the 250 MHz AOFS.
The filter frequency response was stable even though the laser source had a narrow linewidth, which demonstrated that the filter was free of the coherent interference problem. The frequency responses of the first and second FS-ARDL loops were also measured and are shown in Fig. 7. The filter frequency response FSRs were 6.2 MHz and 8.07 MHz, respectively. This demonstrates that a 10-fold increase in the FSR can be achieved by using the dual FS-ARDL loop structure. Fig. 6. Measured (solid) and simulated (dots) frequency responses of the dual-loop large-FSR microwave photonic bandpass filter. (a) Wideband response and (b) detailed section of the response within 1.5 MHz of the passband frequency. Fig. 7. Measured frequency responses of the single FS-ARDL microwave photonic bandpass filters with 6.2 MHz (dash) and 8.07 MHz (solid) FSR. The SNRs of the single and dual FS-ARDL microwave photonic bandpass filters were measured by applying an RF signal at the filter passband frequency into the electro-optic intensity modulator. The output RF signal power and the output noise power were measured on an electrical spectrum analyzer connected to the photoreceiver. A low noise power was obtained at the output of the single and dual FS-ARDL microwave photonic bandpass filters, demonstrating there was no PIIN. The measured single and dual FS-ARDL microwave photonic bandpass filter SNRs at the filter passband were 47.2 dB and 43.9 dB, respectively, in a 100 kHz resolution bandwidth. This shows the SNR of the dual-loop structure is around 3 dB lower than that of the single-loop structure, which agrees with the theoretical prediction. Although the dual FS-ARDL microwave photonic bandpass filter has a 3 dB lower SNR, it has a much larger frequency response FSR than the single FS-ARDL microwave photonic bandpass filter. Finally, experiments were carried out to demonstrate that a larger-FSR bandpass filter response can be obtained by using the dual-loop structure.
This was achieved by replacing the 250 MHz AOFS in the first FS-ARDL loop with a single-sideband suppressed carrier (SSB-SC) modulator based frequency shifter [17]. The SSB-SC modulator based frequency shifter can provide a larger frequency shift than the AOFS, which enables the filter to be demonstrated over a wider frequency range without the aliasing problem. The SSB-SC modulator was set up to provide a 2 GHz frequency shift. The length of the first FS-ARDL loop was 29.7 m, which resulted in a bandpass filter response with an FSR of 6.73 MHz. Fig. 8 shows the frequency responses of the dual-loop large-FSR microwave photonic bandpass filter for the second FS-ARDL having a loop length of 26.592 m $(\hbox{FSR} = 7.52\ \hbox{MHz})$ and 27.917 m $(\hbox{FSR} = 7.164\ \hbox{MHz})$, respectively. It can be seen from the figure that the combined response has a passband at 127.9 MHz and 222.1 MHz, which demonstrated 17-fold and 31-fold increases in the bandpass filter response FSR. A microwave photonic bandpass filter with an even larger FSR can be realized by adjusting the second FS-ARDL loop length and by replacing the 750 MHz AOFS in the second loop with an SSB-SC modulator based frequency shifter that produces a larger frequency shift. A 40 GHz bandwidth SSB-SC modulator has been demonstrated [18], showing that the large-FSR microwave photonic bandpass filter can operate well into microwave frequencies. Note that the SSB-SC modulator is an electro-optic device and is implemented using lithium niobate technology. Hence, the FS-ARDL loop, which is formed by an optical coupler, an optical amplifier and an SSB-SC modulator based optical frequency shifter, can be fabricated on a lithium niobate waveguide [5], [6], [18]. This enables the large-FSR microwave photonic bandpass filter to be realized using two or more very small-delay high-frequency FS-ARDL modules. Fig. 8.
Measured frequency responses of the dual-loop large-FSR microwave photonic bandpass filter for different second FS-ARDL loop lengths of (a) 26.592 m and (b) 27.917 m. SECTION V ## CONCLUSION The noise components in a new microwave photonic signal processing structure for realizing a large-FSR high-resolution bandpass filter response have been investigated. The filter is based on using the Vernier effect and the frequency shifting technique in an optical delay line structure. It solves, for the first time, both the limited-FSR problem and the PIIN problem in IIR-based optical delay line architectures. The power spectrum of the s-sp beat noise, which is the dominant noise source in the multiple FS-ARDL structure, has been analyzed and compared with that of the single FS-ARDL structure. Simulation results show that the advantage of low-noise performance in the FS-ARDL remains in the dual-loop structure. Experimental results have been presented that demonstrate a 31-fold increase in the FSR of a bandpass filter response. The results have also demonstrated robust high-resolution bandpass filtering operation using a narrow-linewidth laser source. Furthermore, no PIIN was observed and high SNR performance was measured. The new photonic based filter offers bandpass filtering up to microwave frequencies and can be integrated in optical fiber microwave transmission systems. ## Footnotes This work was supported by the Australian Research Council. Corresponding author: E. H. W. Chan (e-mail: erwin.chan@sydney.edu.au).
# The infamous pong

## Recommended Posts

So, as you could imagine, I'm writing a game of pong. So far I have everything drawn, albeit crudely; drawing and user input are fine, and collision detection is done. Now, here's my slight problem. Since this is 2D, the ball (currently a white box) is set up with an angle, and each time everything is drawn the box is advanced along its path, by however far, using trig. Whenever a collision with the top or bottom wall takes place, I have it bouncing back at a random angle using this code:

```cpp
if( /* top wall collision */ ){
    // generate random angles until one is between 90 and 270 degrees (downwards)
    // (note: the original condition used &&, which can never be true here, so the
    // loop exited immediately and any angle could slip through)
    float tempangle = ((float)rand() / (float)RAND_MAX) * 360.0f;
    while( tempangle < 90.0f || tempangle > 270.0f )
        tempangle = ((float)rand() / (float)RAND_MAX) * 360.0f;
    ((CBall*)game_items[ball_index])->setAngle(tempangle); // set new angle
}
else if( /* bottom wall collision */ ){
    // generate random angles until one is between 270 and 90 degrees (upwards)
    float tempangle = ((float)rand() / (float)RAND_MAX) * 360.0f;
    while( tempangle > 90.0f && tempangle < 270.0f )
        tempangle = ((float)rand() / (float)RAND_MAX) * 360.0f;
    ((CBall*)game_items[ball_index])->setAngle(tempangle); // set new angle
}
```

With that, sometimes the ball will hit the top at an angle of 45 degrees and bounce backwards. I'm looking for a better way to generate the angle than checking the incoming angle and making sure the new angle is in the right quadrant rather than just the right half; that seems a little inelegant. Is there a better way to do it that won't result in completely predictable angles? Also, is there a better function to use than rand()? Even though I'm seeding it with time(NULL), it practically always produces the same first two or three bounces.

---

Store your velocity as a vector, or as two variables xSpeed and ySpeed. When you hit the top or bottom wall, negate ySpeed. That's all there is to it.

---

I'd do what wendigo23 suggested, but if you want some randomness in the bounce, add or subtract a small angle after you have the new direction. CJM

---

Just for efficiency, you can get a random number between min and max with min + float(max - min) * rand() / RAND_MAX.

---

Naturally, if you're using rand() you'll need to seed it first so it is actually random: srand((unsigned int)time(0)); then you can call rand() % (high - low) + low. If you're going to generate random angles, I suggest capping the values so the ball can't hit at 45 degrees and bounce straight back. Or handle it as two vectors: negate the y movement and add rand() % (3 - 0) - 1.5 to the x movement, giving a random change between -1.5 and 1.5. But beware: without capping the x movement this could eventually lead to the ball bouncing at ridiculous angles. Either way you'll need a cap, a max angle or max speed. Hope this helps.
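A minimal Python sketch of the approach suggested in the replies: keep the velocity as two components, negate the vertical one on a wall hit, and perturb the horizontal one slightly, with a cap so the bounce angles never get ridiculous. The function name, jitter range, and cap are illustrative assumptions, not from the thread.

```python
import random

def bounce_vertical(vx, vy, jitter=0.3, max_ratio=2.0):
    """Reflect off a horizontal (top/bottom) wall with a little randomness.

    jitter and max_ratio are illustrative tuning values: jitter is the
    maximum random change to the horizontal speed, and max_ratio caps
    |vx| relative to |vy| so the ball never travels almost flat.
    """
    vy = -vy                                   # the actual bounce
    vx += random.uniform(-jitter, jitter)      # small random perturbation
    limit = max_ratio * abs(vy)
    vx = max(-limit, min(limit, vx))           # enforce the cap
    return vx, vy
```

The same idea works for the paddles by negating vx instead; seeding once at startup with srand/`random.seed` is enough.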
https://www.researcher-app.com/paper/274805
# Combined Constraints on the Equation of State of Dense Neutron-Rich Matter from Terrestrial Experiments and Observations of Neutron Stars

Bao-An Li, Jun Xu, Nai-Bo Zhang

Within the parameter space of the equation of state (EOS) of dense neutron-rich matter limited by existing constraints, mainly from terrestrial nuclear experiments, we investigate how the neutron star maximum mass $M_{\rm{max}}>2.01\pm0.04$ M$_\odot$, radius $10.62<R_{\rm{1.4}}<12.83$ km and tidal deformability $\Lambda_{1.4}\leq800$ of canonical neutron stars together constrain the EOS of dense neutron-rich nucleonic matter. While the 3-D parameter space of $K_{\rm{sym}}$ (curvature of the nuclear symmetry energy), $J_{\rm{sym}}$ and $J_0$ (skewness of the symmetry energy and of the EOS of symmetric nuclear matter (SNM), respectively) is narrowed down significantly by the observational constraints, more data are needed to pin down the individual values of $K_{\rm{sym}}$, $J_{\rm{sym}}$ and $J_0$. The $J_0$ largely controls the maximum mass of neutron stars. While the EOS with $J_0=0$ is sufficiently stiff to support neutron stars as massive as 2.37 M$_{\odot}$, supporting ones as massive as 2.74 M$_{\odot}$ (the composite mass of GW170817) requires $J_0$ to be larger than its currently known maximum value of about 400 MeV. The upper limit on the tidal deformability, $\Lambda_{1.4}=800$, from the recent observation of GW170817 is found to provide upper limits on some EOS parameters consistent with, but less restrictive than, the existing constraints from the other observables studied.

Publisher URL: http://arxiv.org/abs/1801.06855

arXiv: 1801.06855v1
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.20/share/doc/Macaulay2/Macaulay2Doc/html/_hh.html
# hh -- Hodge numbers of a smooth projective variety

## Synopsis

• Usage: hh^(p,q)(X)
• Inputs: a pair (p,q) of non-negative integers
• Outputs: the Hodge number h^{p,q} of X (an integer)

## Description

The command computes the Hodge numbers h^{p,q} of the smooth projective variety X. They are calculated as HH^q(cotangentSheaf(p,X)).

As an example, we compute h^{1,1} of a K3 surface (the Fermat quartic in projective three-space):

```
i1 : X = Proj(QQ[x_0..x_3]/ideal(x_0^4+x_1^4+x_2^4+x_3^4))

o1 = X

o1 : ProjectiveVariety

i2 : hh^(1,1)(X)

o2 = 20
```

## Caveat

No check is made as to whether the projective variety X is smooth.
https://ai.stackexchange.com/questions/22637/how-graph-convolutional-neural-networks-forward-propagate
How do Graph Convolutional Neural Networks forward propagate? Here we aggregate the information from the adjacent nodes and pass it through a neural network, then transform our own information and add them all together. But the main question is: how can we ensure that $$W_{k}\left(\sum_{u \in N(v)}\frac{h_{u}}{|N(v)|}\right)$$ will be the same size as $$B_{k}h_{v}$$, and does $$B_{k}$$ employ another neural network?
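In the usual formulation the sizes match by construction: $W_k$ and $B_k$ are two learned weight matrices of the *same* layer, both of shape $d_{\text{out}} \times d_{\text{in}}$, so the aggregated-neighbour term and the self term are both vectors in $\mathbb{R}^{d_{\text{out}}}$ and can be added; $B_k$ is not a separate network. A minimal NumPy sketch (dimensions, names, and the random data are illustrative assumptions):

```python
import numpy as np

# One GCN/GraphSAGE-style layer update for a single node v:
#   h_v_new = ReLU( W @ mean(h_u for u in N(v)) + B @ h_v )
rng = np.random.default_rng(0)

d_in, d_out = 4, 8
W = rng.standard_normal((d_out, d_in))   # weights for aggregated neighbour features
B = rng.standard_normal((d_out, d_in))   # weights for the node's own features

h_v = rng.standard_normal(d_in)               # this node's current features
h_neighbors = rng.standard_normal((3, d_in))  # features of the |N(v)| = 3 neighbours

agg = W @ h_neighbors.mean(axis=0)   # shape (d_out,)
self_term = B @ h_v                  # shape (d_out,) -- same size by construction
h_v_new = np.maximum(agg + self_term, 0)  # ReLU non-linearity
```

Because both matrices map $d_{\text{in}} \to d_{\text{out}}$, the addition is always well defined regardless of how many neighbours $v$ has.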
https://stats.stackexchange.com/questions/263017/neural-network-counter-intuitive-result-from-a-binary-classification-perhaps
# Neural Network: Counter-intuitive result from a binary classification, perhaps due to experimental design?

I am setting up a binary classification problem where a trade gets a 1 if it is profitable and a 0 if it is unprofitable. I have some independent variables, one of which is price_of_security, and the dependent variable trade_profit. I have data on about 20,000 such trades. I train a neural network on this data and then save the model.

Next, I take one data point and make ten copies of it, incrementing the price downward and upward while holding all of the other independent variables constant. Like so:

|ticker| price|
|------|-------|
| WTO| 216.94|
| WTO| 217.94|
| WTO| 218.94|
| WTO| 219.94|
| WTO| 220.94|
| WTO| 221.94| # This is the original price
| WTO| 222.94|
| WTO| 223.94|
| WTO| 224.94|
| WTO| 225.94|

I then put this data into my saved neural network to see whether it associates a lower price with a higher chance of the trade being successful. I believe it intuitively should associate a cheaper price with a higher chance of success, and higher prices with lower chances of success. My reasoning is that if the fair value of a security is 215, then my model should not recommend a buy at 220 but should recommend a buy at 210. However, this is what the model predicts:

|ticker| price| probability_1| label|
|------|-------|--------------|------|
| WTO| 216.94| 0.32| 0|
| WTO| 217.94| 0.37| 0|
| WTO| 218.94| 0.42| 0|
| WTO| 219.94| 0.44| 0|
| WTO| 220.94| 0.49| 0|
| WTO| 221.94| 0.51| 1|
| WTO| 222.94| 0.56| 1|
| WTO| 223.94| 0.57| 1|
| WTO| 224.94| 0.58| 1|
| WTO| 225.94| 0.60| 1|

The model, however, finds that higher prices are associated with a higher chance of the trade being a 1 (i.e. a successful trade). To understand what is going on, I made a table of all the data I have to see what the model is learning.
This is the table:

| bin | num trades | success ratio |
|---------|------------|---------------|
| 200-205 | 568 | 0.85 |
| 205-210 | 754 | 0.79 |
| 210-215 | 1200 | 0.78 |
| 215-220 | 4452 | 0.55 |
| 220-225 | 6783 | 0.59 |
| 225-230 | 8197 | 0.68 |

After a lot of thinking, I believe the model associates a higher chance of success with a higher price because there are so many more successful trades (in total) at the higher prices. Perhaps those trades make less money per trade, but the trained model only cares about the 0 or the 1, not about the magnitude of the profit. I think that is why I'm seeing these counter-intuitive results. Should this experiment be set up as a regression problem rather than a classification problem?
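A per-bin table like the one above can be produced with a short pandas sketch. The column names and the synthetic data below are illustrative stand-ins for the real 20,000 trades, not the actual dataset:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the ~20,000 trades: success made more likely
# at higher prices, mimicking the pattern in the question's table.
rng = np.random.default_rng(1)
price = rng.uniform(200, 230, size=20_000)
p_success = 0.5 + (price - 215) / 60
trade_profit = (rng.uniform(size=price.size) < p_success).astype(int)

df = pd.DataFrame({"price": price, "trade_profit": trade_profit})

# Bin prices into 5-unit buckets and compute count and success ratio per bin.
bins = pd.cut(df["price"], bins=range(200, 235, 5))
summary = df.groupby(bins, observed=True)["trade_profit"].agg(["size", "mean"])
summary.columns = ["num_trades", "success_ratio"]
print(summary)
```

Because the label is binary, this per-bin mean is exactly what the classifier is fitting; switching to a regression on profit magnitude would let the model see that cheap trades, though fewer, may be worth more each.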
https://www.jobilize.com/online/course/4-6-exponential-and-logarithmic-equations-by-openstax?qcr=www.quizover.com&page=4
# 4.6 Exponential and logarithmic equations (Page 5/8)

To check the result, substitute $x=10$ into $\log(3x-2)-\log(2)=\log(x+4)$.

## Using the one-to-one property of logarithms to solve logarithmic equations

For any algebraic expressions $S$ and $T$ and any positive real number $b$, where $b\ne 1$,

$$\log_b S = \log_b T \text{ if and only if } S = T.$$

Note: when solving an equation involving logarithms, always check whether the answer is correct or an extraneous solution.

Given an equation containing logarithms, solve it using the one-to-one property:

1. Use the rules of logarithms to combine like terms, if necessary, so that the resulting equation has the form $\log_b S = \log_b T$.
2. Use the one-to-one property to set the arguments equal.
3. Solve the resulting equation, $S=T$, for the unknown.

## Solving an equation using the one-to-one property of logarithms

Solve $\ln(x^2)=\ln(2x+3)$.

Solve $\ln(x^2)=\ln 1$. (Answer: $x=1$ or $x=-1$.)

## Solving applied problems using exponential and logarithmic equations

In previous sections, we learned the properties and rules for both exponential and logarithmic functions. We have seen that any exponential function can be written as a logarithmic function and vice versa. We have used exponents to solve logarithmic equations and logarithms to solve exponential equations. We are now ready to combine our skills to solve equations that model real-world situations, whether the unknown is in an exponent or in the argument of a logarithm.
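The example $\ln(x^2)=\ln(2x+3)$, whose solution the page omits, can be worked with the one-to-one property:

```latex
\begin{aligned}
\ln(x^2) &= \ln(2x+3) \\
x^2 &= 2x+3 && \text{one-to-one property} \\
x^2 - 2x - 3 &= 0 \\
(x-3)(x+1) &= 0 \\
x &= 3 \quad \text{or} \quad x = -1
\end{aligned}
```

Checking: $x=3$ gives $\ln 9 = \ln 9$, and $x=-1$ gives $\ln 1 = \ln 1$ (since $2(-1)+3=1$). Both arguments are positive, so neither solution is extraneous.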
One such application is in science: calculating the time it takes for half of the unstable material in a sample of a radioactive substance to decay, called its half-life. [link] lists the half-lives of several of the more common radioactive substances.

| Substance | Use | Half-life |
|-----------|-----|-----------|
| gallium-67 | nuclear medicine | 80 hours |
| cobalt-60 | manufacturing | 5.3 years |
| technetium-99m | nuclear medicine | 6 hours |
| americium-241 | construction | 432 years |
| carbon-14 | archeological dating | 5,715 years |
| uranium-235 | atomic power | 703,800,000 years |

We can see how widely the half-lives of these substances vary. Knowing the half-life of a substance allows us to calculate the amount remaining after a specified time. We can use the formula for radioactive decay:

$$A(t)=A_0 e^{\frac{\ln(0.5)}{T}t}=A_0 e^{\ln(0.5)\frac{t}{T}}=A_0\left(e^{\ln(0.5)}\right)^{t/T}=A_0\left(\frac{1}{2}\right)^{t/T}$$

where

• $A_0$ is the amount initially present
• $T$ is the half-life of the substance
• $t$ is the time period over which the substance is studied
• $A(t)$ is the amount of the substance present after time $t$

## Using the formula for radioactive decay to find the quantity of a substance

How long will it take for ten percent of a 1000-gram sample of uranium-235 to decay?
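The uranium-235 question can be checked numerically with a short Python sketch, assuming the decay formula above; the function names are mine:

```python
import math

# Half-life model from the text: A(t) = A0 * (1/2)**(t / T)
def remaining(A0, T, t):
    """Amount left after time t for a substance with half-life T."""
    return A0 * 0.5 ** (t / T)

def time_until(A0, T, A):
    """Solve A = A0 * (1/2)**(t/T) for t."""
    return T * math.log(A / A0) / math.log(0.5)

T_u235 = 703_800_000                 # half-life of uranium-235 in years (from the table)
t = time_until(1000, T_u235, 900)    # ten percent of 1000 g has decayed, 900 g remain
print(f"{t:,.0f} years")             # roughly 1.07e8 years
```

The algebra behind `time_until` is just taking logarithms of both sides: $t = T\,\dfrac{\ln(A/A_0)}{\ln(0.5)}$.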
https://mathshistory.st-andrews.ac.uk/Biographies/Baker_Alan/
# Alan Baker ### Quick Info Born 19 August 1939 London, England Died 4 February 2018 Cambridge, England Summary Alan Baker was an English mathematician, known for his work in number theory. ### Biography Alan Baker was educated at Stratford Grammar School. From there, after winning a State Scholarship, he entered University College London where he studied for his B.Sc. He was awarded a B.Sc. with First Class Honours in Mathematics in 1961. He moved on to Trinity College Cambridge where he was awarded an M.A. Continuing his research at Cambridge advised by Harold Davenport, Baker began publishing papers. In fact eight of his papers had appeared in print before he submitted his doctoral dissertation: Continued fractions of transcendental numbers (1962); On Mahler's classification of transcendental numbers (1964); Rational approximations to certain algebraic numbers (1964); On an analogue of Littlewood's Diophantine approximation problem (1964); Approximations to the logarithms of certain rational numbers (1964); Rational approximations to the cube root of 2 and other algebraic numbers (1964); Power series representing algebraic functions (1965); and On some Diophantine inequalities involving the exponential function (1965). He received his doctorate from the University of Cambridge for his thesis Some Aspects of Diophantine Approximation in 1965. In the same year he was elected a Fellow of Trinity College. He spent the academic year 1964-65 at the Department of Mathematics, University College London. From 1964 to 1968 Baker was a research fellow at Cambridge, then becoming Director of Studies in Mathematics, a post which he held from 1968 until 1974 when he was appointed Professor of Pure Mathematics. During his career at Cambridge he spent time in the United States, becoming a member of the Institute for Advanced Study at Princeton in 1970 and visiting professor at Stanford in 1974. 
He was also a visiting professor at the University of Hong Kong in 1988, at the Eidgenössische Technische Hochschule Zürich in 1989, and at the Mathematical Sciences Research Institute, Berkeley, California in 1993.

Baker was awarded a Fields Medal in 1970 at the International Congress at Nice. This was awarded for his work on Diophantine equations. This is described by Paul Turán in [11], who first gives the historical setting:-

The theory of transcendental numbers, initiated by Liouville in 1844, has been enriched greatly in recent years. Among the relevant profound contributions are those of Alan Baker, Wolfgang M Schmidt, and Vladimir Gennadievich Sprindzuk. Their work moves in important directions which contrast with the traditional concentration on the deep problem of finding significant classes of functions assuming transcendental values for all non-zero algebraic values of the independent variable. Among these, Baker's have had the heaviest impact on other problems in mathematics. Perhaps the most significant of these impacts has been the application to Diophantine equations. This theory, carrying a history of more than one thousand years, was, until the early years of this century, little more than a collection of isolated problems subjected to ingenious ad hoc methods. It was Axel Thue who made the breakthrough to general results by proving in 1909 that all Diophantine equations of the form $f(x, y) = m$, where $m$ is an integer and $f$ is an irreducible homogeneous binary form of degree at least three with integer coefficients, have at most finitely many solutions in integers.

Turán goes on to say that Carl Siegel and Klaus Roth generalised the classes of Diophantine equations for which these conclusions would hold and even bounded the number of solutions. Baker however went further and produced results which, at least in principle, could lead to a complete solution of this type of problem.
He proved that for equations of the type $f(x, y) = m$ described above there is a bound $B$, depending only on $m$ and the integer coefficients of $f$, with $\max(|x_{0}|, |y_{0}|) \le B$ for any solution $(x_{0}, y_{0})$ of $f(x, y) = m$. Of course this means that only a finite number of possibilities need be considered, so, at least in principle, one could determine the complete list of solutions by checking each of the finitely many candidates.

Baker also made substantial contributions to Hilbert's seventh problem, which asked whether $a^{q}$ is transcendental when $a$ and $q$ are algebraic (with $a \ne 0, 1$ and $q$ irrational). Hilbert himself remarked that he expected this problem to be harder than the solution of the Riemann conjecture. It was solved independently by Aleksandr Gelfond and Theodor Schneider in 1934, but Baker ([6]):-

... succeeded in obtaining a vast generalisation of the Gelfond-Schneider Theorem ... From this work he generated a large category of transcendental numbers not previously identified and showed how the underlying theory could be used to solve a wide range of Diophantine problems.

Turán [11] concludes with these remarks:-

I remark that [Baker's] work exemplifies two things very convincingly. Firstly, that beside the worthy tendency to start a theory in order to solve a problem it pays also to attack specific difficult problems directly. ... Secondly, it shows that a direct solution of a deep problem develops itself quite naturally into a healthy theory and gets into early and fruitful contact with significant problems of mathematics.

Among Baker's famous books are: Transcendental number theory (1975), Transcendence theory : advances and applications (1977), A concise introduction to the theory of numbers (1984), (with Gisbert Wüstholz) Logarithmic forms and Diophantine geometry (2007), and A Comprehensive Course in Number Theory (2012).
We quote from the introduction to Transcendental number theory (1975):- The study of transcendental numbers... has now developed into a fertile and extensive theory, enriching widespread branches of mathematics. My aim has been to provide a comprehensive account of the recent major discoveries in the field. Classical aspects of the subject are discussed in the course of the narrative. Proofs in the subject tend... to be long and intricate, and thus it has been necessary to select for detailed treatment only the most fundamental results; moreover, generally speaking, emphasis has been placed on arguments which have led to the strongest propositions known to date or have yielded the widest application. Robert Tijdeman writes in a review of this book:- The author has succeeded in his plan. This book gives a survey of the highlights of transcendental number theory, in particular of the author's own important contributions for which he was awarded a Fields medal in 1970. It is a very useful publication for mathematicians who want to obtain a general insight into transcendence theory, its techniques and its applicability. The style is extremely condensed, but there are many references for more detailed study. The presentation is very well done. This book is also reviewed by Heini Halberstam (1926-2014) who writes [7]:- Within the space of a mere 130 pages the author gives a panoramic account of modern transcendence theory, based on his own Adams Prize essay. The fact that this is now "a fertile and extensive theory, enriching wide-spread branches of mathematics" is due in large measure to the author himself, who was awarded in 1970 a Fields Medal (the Nobel Prize of mathematics) for his contributions. The prose is clear and economical yet interspersed with flashes of colour that convey a sense of personality; and each chapter begins with a helpful summary of the subsequent matter. 
The mathematical argument at all stages is highly condensed, as, indeed, is inevitable in a short research monograph covering so much ground. One might reproach the author for not having been more merciful to the beginner; but even a beginner can gain from the book a clear impression of what are the major achievements to date in this profoundly difficult field and which are the outstanding problems, while for others there is here a wealth of material for numerous fruitful study-groups. Don Redmond, reviewing Baker's 1984 "Concise Introduction", writes:- Many books do not live up to their titles, but this is one that definitely does. The book is very concise and would be a nice reference, since it covers the key points of a standard course, but the reviewer is not sure that one could use it as the sole textbook of a first course in number theory. David Singmaster, also reviewing the "Concise Introduction", writes [10]:- Introductions to number theory are numerous, so any new introduction must be examined for novelty. This book is the material for a lecture course at the University of Cambridge. Consequently, "concise" is no exaggeration. ... Overall, the book is a marvel of condensation. This would be true even if all 91 pages of text were devoted to the main material, but he has condensed further and uses about 30 pages for his supplementary material. This contains the most useful summary of current number theory that I have seen. There is a competent index so one can locate the results. ... I would recommend this book to any serious undergraduate wanting a survey of the field, but I would warn him that the proofs require close attention. 
Anyone with some background in number theory will highly appreciate Baker's exposition of current knowledge. Yuri Bilu states in a review of Baker and Wüstholz's 2007 book Logarithmic forms and Diophantine geometry:- This long-awaited book is an introduction to the classical work of Baker, Masser and Wüstholz in a form suitable for both undergraduate and graduate students. ... This book is indeed an introduction. Its purpose is to teach principles while avoiding technicalities. This imposes certain limitations on the content. The authors treat in great detail the qualitative theory for the multiplicative group, but do not say much on the quantitative aspect, and only briefly mention abelian varieties. However, this book gives the necessary intuitive background to study the original journal articles of Baker, Masser, Wüstholz and others on the above-listed subjects. Baker also edited the important New advances in transcendence theory (1988) and wrote the important survey with Gisbert Wüstholz entitled Number theory, transcendence and Diophantine geometry in the next millennium. This is a survey of achievements and open problems in transcendence theory and related mathematics. In 1999 a conference was organised in Zürich to celebrate Baker's 60th birthday. Most of the lectures given at the meeting were published in A Panorama in Number Theory or The View from Baker's Garden (2002). The Introduction to the book begins as follows:- The millennium, together with Alan Baker's 60th birthday offered a singular occasion to organize a meeting in number theory and to bring together a leading group of international researchers in the field; it was generously supported by ETH Zürich together with the Forschungsinstitut für Mathematik. This encouraged us to work out a programme that aimed to cover a large spectrum of number theory and related geometry with particular emphasis on Diophantine aspects. ... 
The London Mathematical Society was represented by its President, Professor Martin Taylor, and it sent greetings to Alan Baker on the occasion of his 60th birthday. In [5] Baker makes remarks on the history of number theory, in particular on transcendental numbers. We quote from his paper:- Well, what does this tell us about the historical evolution of mathematics? First it is clear that a very important role has been played by a few key problems, centres of attraction, in Professor Dieudonné's terminology. This may be more true of number theory than other branches of mathematics but I believe that all good work has been guided to some extent by such centres. The general trend of the particular field that I have been discussing is difficult to summarise, since it has involved in its development many novel twists and turns; but one obvious element in the evolution has been the successful blending, or fusion, of ideas from number theory and algebra with the progressively wider use of classical function theory. And it is this convergence of diverse concepts that forms the essential ingredient, I believe, in the creation of an active theory. According to Professor Dieudonné, the study of transcendental numbers is only just on its way to becoming a "method". Given, however, the diverse nature of the problems which it has been instrumental in solving, there seems little doubt that it reached the latter stage several years ago, and it would appear, in fact that it is already on the path of becoming, in Professor Dieudonné's language, a centre of radiation. Here are Baker's research interests as given on his University of Cambridge page [9] (consulted in January 2014):- Baker's Theorem on the linear independence of logarithms of algebraic numbers has been the key to a vast range of developments in number theory over the past thirty years. 
Amongst the most significant are applications to the effective solution of Diophantine equations, to the resolution of class-number problems, to the theory of p-adic L-functions and especially, through works of Masser and Wüstholz, to many deep aspects of arithmetical algebraic geometry. The theory continues to be a source of much fruitful research to the present day. Baker has received many honours for his mathematical contributions in addition to the 1970 Fields medal. These include the award of the Adams prize from the University of Cambridge (1972) and election to the Royal Society of London (1973). He was awarded an honorary doctorate from Université Louis Pasteur Strasbourg (1998), made an honorary fellow of University College London (1979), a foreign fellow of the Indian Academy of Science (1980), foreign fellow of the National Academy of Sciences India (1993), a member of the Academia Europaea (1998), and an honorary member of the Hungarian Academy of Sciences (2001). Outside of mathematics, Baker lists his interests as travel, photography and the theatre.

### References

1. Biography in Encyclopaedia Britannica. http://www.britannica.com/biography/Alan-Baker
2. Alan Baker, Heidelberg Laureate Forum. http://www.heidelberg-laureate-forum.org/blog/laureate/alan-baker/
3. A Baker, Effective methods in the theory of numbers, Actes du Congrès International des Mathématiciens, Nice, 1970 Vol. 1 (Paris, 1971).
4. A Baker, Effective methods in the theory of numbers/Diophantine problems, in M Atiyah and D Iagolnitzer (eds.), Fields Medallists Lectures (World Sci. Publ., Singapore, 1997), 190-193.
5. A Baker, Some historical remarks on number theory, Historia Mathematica 2 (1975), 549-553.
6. Alan Baker, in M Atiyah and D Iagolnitzer (eds.), Fields Medallists Lectures (World Sci. Publ., Singapore, 1997), 161.
7. H Halberstam, Review: Transcendental Number Theory, by Alan Baker, The Mathematical Gazette 59 (410) (1975), 280-282.
8. J C Peral, Alan Baker: transcendental work (Spanish), Gac. R. Soc. Mat. Esp. 4 (2) (2001), 437-445.
9. Professor Alan Baker, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge (2014). https://www.dpmms.cam.ac.uk/people/ab10005/
10. D Singmaster, Review: A Concise Introduction to the Theory of Numbers, by Alan Baker, The Mathematical Gazette 69 (450) (1985), 318-319.
11. P Turán, On the work of Alan Baker, Actes du Congrès International des Mathématiciens, Nice, 1970 Vol. 1 (Paris, 1971), 3-5.
12. P Turán, On the work of Alan Baker, in M Atiyah and D Iagolnitzer (eds.), Fields Medallists Lectures (World Sci. Publ., Singapore, 1997), 157-159.
# Function for Block Sensitivity comparison with Sensitivity

I need to find a function $f$ whose block sensitivity is larger than its sensitivity, i.e. $bs(f) > s(f)$. For example, the sortedness function, where $0000$, $0001$, $0011$, $0111$, $1111$, $1110$, $1100$, $1000$ give $1$ and every other input gives $0$: here $bs(f)$ is $3$ and $s(f)$ is $2$. I am looking for more such simple examples, ideally where the gap between $bs$ and $s$ is even larger.

- Could you give (or link to) definitions of "block sensitivity" and "sensitivity"? –  Henning Makholm Apr 16 '12 at 15:05
- And what does this have to do with complex analysis? –  Henning Makholm Apr 16 '12 at 15:06
- technologyreview.com/blog/post.aspx?bid=349&bpid=25470 this is quite a nice article –  rskuja Apr 16 '12 at 21:43
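To make the example concrete, here is a small brute-force check (my own sketch, not from the thread) that computes $s(f)$ and $bs(f)$ for the sortedness function above by enumerating all 4-bit inputs:

```python
from itertools import combinations

N = 4
# Inputs on which the "sortedness" function returns 1,
# written as 4-bit integers (bits b3 b2 b1 b0).
ACCEPT = {0b0000, 0b0001, 0b0011, 0b0111, 0b1111, 0b1110, 0b1100, 0b1000}

def f(x):
    return 1 if x in ACCEPT else 0

def sensitivity(x):
    # Number of single bit positions whose flip changes f at x.
    return sum(f(x ^ (1 << i)) != f(x) for i in range(N))

def block_sensitivity(x):
    # All nonempty bit masks ("blocks") whose flip changes f at x.
    blocks = [b for b in range(1, 1 << N) if f(x ^ b) != f(x)]
    best = 0
    # Brute-force the largest family of pairwise disjoint sensitive blocks.
    for k in range(1, N + 1):
        for family in combinations(blocks, k):
            used = 0
            for b in family:
                if used & b:
                    break
                used |= b
            else:  # no overlap found: all k blocks are disjoint
                best = max(best, k)
    return best

s = max(sensitivity(x) for x in range(1 << N))
bs = max(block_sensitivity(x) for x in range(1 << N))
print(s, bs)  # prints: 2 3
```

At the input $0100$ the three disjoint sensitive blocks are {bit 3}, {bit 2} and {bits 1, 0}: flipping them gives $1100$, $0000$ and $0111$ respectively, all of which are accepted, so $bs(f) \geq 3$ while every input has sensitivity 2.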
Provided by: hovercraft_2.6-4ubuntu1_all

#### NAME

hovercraft - Hovercraft! Documentation

#### INTRODUCTION

GUI tools are limiting

I used to do presentations with typical slideshow software, such as OpenOffice/LibreOffice Impress, but these tools felt restricted and limiting. I need to do a lot of reorganizing and moving around, and that might mean changing things from bullet lists to headings to text to pictures and back to bullet lists again. This happens throughout the whole process. I might realize that something that was just a bullet point needs to be a slide, or that a set of slides needs, for time reasons, to be shortened down to bullet points. Much of the reorganization comes from seeing what fits on one slide and what does not, how I need to pace the presentation, and to some extent even what kind of pictures I can find to illustrate what I am trying to say, and whether the pictures are funny or not.

Presentation software should give you complete freedom to reorganize your presentation on every level, not only by reorganizing slides. The solution for me and many others is to use a text-markup language, like reStructuredText, Markdown or similar, and then use a tool that generates an HTML slide show from that. Text-markup gives you the convenience and freedom to quickly move parts around as you like. I chose reStructuredText because I know it and because it has a massive feature set. When I read the documentation of other text-markup languages it was not obvious whether they had the features I needed.

Pan, rotate and zoom

The tools that exist to make presentations from text-markup will make slideshows that have a sequence of slides from left to right. But the fashion now is to have presentations that rotate and zoom in and out. One open source solution for that is impress.js. With impress.js you can make modern, cool presentations. 
But impress.js requires you to write your presentation as HTML, which is annoying, and the markup isn't flexible enough to let you quickly reorganize things from bullet points to headings and back. You also have to position each slide separately, and if you insert a new slide in the middle, you have to reposition all the slides that follow.

Hovercraft!

So what I want is a tool that takes the power, flexibility and convenience of reStructuredText and allows me to generate pan, rotate and zoom presentations with impress.js, without having to manually reposition each slide when I reorganize a little bit of the presentation. I couldn't find one, so I made Hovercraft! Hovercraft's power comes from the combination of reStructuredText's convenience with the cool of impress.js, together with a flexible and powerful solution for positioning the slides. There are five ways to position slides:

1. Absolute positioning: You simply add X and Y coordinates to a slide, in pixels. Doing only this will not be fun, but someone might need it.

2. Relative positioning to the last slide: By specifying x and/or y values starting with r, you specify the distance from the previous slide. By using this form of positioning you can insert a slide, and the other slides will just move to make space for the new one.

3. Relative positioning to any slide: You can reference any previous slide by its id and specify the position relative to it. This works for all positioning fields. However, you should not use r as a slide id, since the positioning might not behave as you expect.

4. Automatically: If you don't specify any position, the slide will have the same settings as the previous slide. With relative positioning, this means the slide will move as far as the previous slide moved. This defaults to moving 1600px to the right, which means that if you supply no positions at all anywhere in the presentation, you get the standard slide-to-the-left presentation.

5. 
With an SVG path: In this last way of positioning, you can take an SVG path from an SVG document and stick it into the presentation, and that slide plus all following slides that have no explicit positioning will be positioned on that path. This can be a bit fiddly to use, but can create awesome results, such as positioning the slides along a snaking Python or similar.

Hovercraft! also includes impress-console, a presenter console that shows you your notes, slide previews and the time: essential tools for any presentation. A help popup appears upon launching a presentation; it shows the following shortcuts.

· Space -> Forward
· Left, Down, Page Down -> Next slide
· Right, Up, Page Up -> Previous slide
· P -> Open presenter console
· H -> Toggle the help popup

#### USING HOVERCRAFT!

You can either use Hovercraft! to generate the presentation as HTML in a target directory, or you can let Hovercraft! serve the presentation from its builtin webserver. The latter has several benefits. One is that most web browsers will be very reluctant to open popup windows from pages served from the file system. This is a security measure which can be changed, but it's easier to just point the browser to http://localhost:8000

The second benefit is that Hovercraft! will monitor the source files for the presentation, and if they are modified Hovercraft! will generate the presentation again automatically. That way you don't have to run Hovercraft! every time you save a file; you only need to refresh the browser.

Parameters

    hovercraft [-h] [-t TEMPLATE] [-c CSS] [-j JS] [-a] [-s] [-n] [-p PORT] <presentation> [<targetdir>]

Positional arguments:

<presentation>
    The path to the reStructuredText presentation file.

<targetdir>
    The directory where the presentation is saved. Will be created if it does not exist. If you do not specify a targetdir Hovercraft! will instead start a webserver and serve the presentation from that server.

Optional arguments:

-h, --help
    Show this help. 
-t TEMPLATE, --template TEMPLATE
    Specify a template. Must be a .cfg file, or a directory with a template.cfg file. If not given it will use a default template.

-c CSS, --css CSS
    An additional CSS file for the presentation to use. See also the :css: settings of the presentation.

-j JS, --js JS
    An additional Javascript file for the presentation to use. Added as a js-body script. See also the :js-body: settings of the presentation.

-a, --auto-console
    Open the presenter console automatically. This is useful when you are rehearsing and making sure the presenter notes are correct. You can also set this by having :auto-console: true first in the presentation.

-s, --skip-help
    Do not show the initial help popup. You can also set this by having :skip-help: true first in the presentation.

-n, --skip-notes
    Do not include presenter notes in the output.

-N, --slide-numbers
    Show the current slide number on the slide itself and in the presenter console. You can also set this by having :slide-numbers: true in the presentation preamble.

-p PORT, --port PORT
    The address and port that the server uses. Ex 8080 or 127.0.0.1:9000. Defaults to 0.0.0.0:8000.

Built in templates

There are two templates that come with Hovercraft! One is called default and will be used unless you specify a template. This is the template you will use most of the time. The second is called simple and it doesn't have a presenter console. This template is especially useful if you combine it with the --skip-notes parameter to prepare a version of your presentation to be put online.

#### MAKING PRESENTATIONS

A note on terminology

Traditionally a presentation is made up of slides. Calling them "slides" is not really relevant in an impress.js context, as they can overlap and don't necessarily slide. The name "steps" is better, but it's also more ambiguous. Hence impress.js uses the terms "slide" and "step" as meaning the same thing, and so does Hovercraft!

Hovercraft! syntax

Presentations are reStructuredText files. 
If you are reading this documentation from the source code, then you are looking at a reStructuredText document already. It's fairly simple: you underline headings to mark them as headings:

    This becomes a h1
    =================

    And this a h2
    -------------

The different ways of underlining them don't mean anything in themselves; instead, their order is relevant, so the first type of underline encountered in the file will make a level 1 heading, the second type a level 2 heading and so on. In this file = is used for level 1, and - for level 2. You can also mark text as italic or bold, with *single asterisks* or **double asterisks** respectively. You can also have bullet lists:

    * Bullet 1

      * Bullet 1.1

    * Bullet 2
    * Bullet 3

And numbered lists:

    1. Item 1

       1.1. Item 1.1

    2. Item 2
    3. Item 3

You can include images:

    .. image:: path/to/image.png
        :height: 600px
        :width: 800px

As you see you can also specify height and width and loads of other parameters, but they are all optional. And you can mark text as being preformatted. You do that by ending the previous row with double colons, or by having a row with double colons by itself:

    ::

        This code here will be preformatted and shown
        with a monospaced font and all spaces preserved.

If you want to add source code, you can use the code directive, and get syntax highlighting:

    .. code:: python

        def some_example_code(foo):
            return foo * foo

The syntax highlighting is done by Pygments and supports lots and lots of languages. You are also likely to want to put a title on the presentation. You do that by having a .. title:: statement before the first slide:

    .. title:: This is the presentation title

That is the most important thing you'll need to know about reStructuredText for making presentations. There is a lot more to know, and a lot of advanced features like links, footnotes, and more. It is in fact advanced enough that you can write a whole book in it, but for all that you need to read the documentation. If you add a math directive then Hovercraft! 
will add a link to the MathJax CDN so that this:

    .. math::

        e^{i \pi} + 1 = 0

will be rendered by the MathJax javascript library. The math directive can also be used as a "role", with the equations inlined in the text flow. Note that if you use the math statement, by default the MathJax library will be loaded from the internet, meaning that your presentation will need network connectivity to work, which can be a problem when presenting at conferences, which often have bad network connectivity. This can be solved by specifying a local copy of MathJax with the --mathjax command-line option.

Presenter notes

To add presenter notes, that will be displayed in the presenter console, use the following syntax:

    .. note::

        Here goes the presenter note.

External files

Any image file referenced in the presentation by a relative path will be copied to the target directory, keeping its relative path to the presentation. The same goes for images or fonts referenced in any CSS files used by the presentation or the template. Images or fonts referenced by absolute paths or URIs will not be copied.

The CSS included by the default template consists of three files:

· impressConsole.css contains the CSS needed for the presenter console to work.
· highlight.css contains a default style for code syntax highlighting, as that otherwise would be a lot of work. If you don't like the default colors or styles in the highlighting, this is the file you should copy and modify.
· hovercraft.css, which only includes the bare minimum: it hides the impress.js fallback message, the presenter notes, and sets up a useful default of having a step width be 1000 pixels wide.

For this reason you want to include your own CSS to style your slides. 
To include a CSS file you add a :css:-field at the top of the presentation:

    :css: css/presentation.css

You can also optionally specify that the CSS should only be valid for certain CSS media:

    :css-screen,projection: css/presentation.css
    :css-print: css/print.css

You can specify any number of CSS files in this way. You can also add one extra CSS file via a command-line parameter:

    hovercraft --css=my_extra.css presentationfile.rst outdir/

Styling the console

You can also optionally add styles to your slides that are only shown when the slide is shown in the presenter console:

    :css-preview: css/slidepreview.css

You can also style the presenter console itself:

    :css-console: css/console.css

That CSS file needs to be based on the impressConsole.css used by the default template, as it replaces that file. In a similar fashion you can add Javascript files to either header or body:

    :js-body: js/secondjsfile.js

You can also add one extra Javascript file via a command-line parameter:

    hovercraft --js=my_extra.js presentationfile.rst outdir/

If you want static content, content that doesn't move with each slide - for example a header, footer, your company logo or a slide background pattern - then you can insert that content with the header and footer commands:

    .. header::

        .. image:: images/company-logo.png

    .. footer:: "How to use Hovercraft", Yern Busfern, ImaginaryCon 2017

The header will be located in the resulting HTML before the first slide and the footer will be located after the last slide. However, they will be displayed statically on every slide, and you will have to position them with CSS. By default the header will be displayed behind the slides and the footer in front of the slides, so the header is useful for background designs and the footer for designs that should be in the foreground. It doesn't matter where in the presentation you add these commands; I would recommend that you add them before the first slide. 
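Putting the fields above together, a minimal presentation preamble might look like the sketch below. The file names, footer text and slide content are illustrative assumptions, not taken from the manual:

```rst
:css: css/presentation.css
:css-print: css/print.css

.. title:: How to use Hovercraft

.. header::

    .. image:: images/company-logo.png

.. footer:: "How to use Hovercraft", ImaginaryCon 2017

----

This is the first slide
=======================

* A bullet point
```

The fields come first in the document so Hovercraft! can attach them to the presentation as a whole, followed by the title, header and footer, and then the slides.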
Styling a specific slide

If you want to have specific styling for a specific slide, it is a good idea to give that slide a unique ID:

    :id: the-slide-id

You can then style that slide specifically with:

    div#the-slide-id {
        /* Custom CSS here */
    }

If you don't give it a specific ID, it will get an ID based on its sequence number. That means the slide's ID will change if you insert or remove slides that come before it, and in that case your custom stylings of that slide will stop working.

Adding a custom class to slides

If you want to apply the same style to one or more slides you may prefer adding a class to those slides instead of (or in addition to) a unique ID:

    :class: my-custom-class

You can then style those slides by adding CSS rules with:

    .my-custom-class {
        /* Custom CSS here */
    }

Adding a custom directive

If you want to use a custom docutils directive, you'll want to run hovercraft in the same process where you register your directive. For example, you can create a custom startup script like the following:

    from docutils import nodes
    from docutils.parsers.rst import Directive, directives

    import hovercraft

    class HelloWorld(Directive):
        def run(self):
            para = nodes.paragraph(text='Hello World')
            return [para]

    directives.register_directive('hello-world', HelloWorld)

    if __name__ == "__main__":
        cmd = ['--skip-help', 'slides.rst']
        hovercraft.main(cmd)

While creating your own directive might be daunting, it's possible to reuse useful directives from other projects. For example, you can reuse Pelican's custom code block, which adds an hl_lines option to highlight specific lines of code. To use that directive, simply add the following import to the above script:

    import pelican.rstdirectives

Portable presentations

Since Hovercraft! generates HTML5 presentations, you can use any computer that has a modern browser installed to view or show the presentation. 
This allows you both to put up the presentation online and to use a borrowed computer for your conference or customer presentation. When you travel you don't know what equipment you will have to use when you show your presentation, and it's surprisingly common to encounter a projector that refuses to talk to your computer. It is also very easy to forget your dongle if you have a MacBook, and there have even been cases of computers going completely black and dead when connected to a projector, even though all other computers seem to work fine.

The main way of making sure your presentation is portable is to try it on different browsers and different computers. But the latter can be unfeasible; not everyone has Windows, Linux and OS X computers at home. To help make your presentations portable it is a good idea to define your own @font-face's and use them, so you are sure that the target browser will use the same fonts as you do. Hovercraft! will automatically find @font-face definitions and copy the font files to the target directory.

impress.js fields

The documentation on impress.js is contained as comments in the demo html file. It is not always very clear, so here comes a short summary for convenience. The different data fields that impress.js will use in 0.5.3, which is the current version, are the following:

· data-transition-duration: The time it will take to move from one slide to another. Defaults to 1000 (1 second). This is only valid on the presentation as a whole.
· data-perspective: Controls the "perspective" in the 3d effects. It defaults to 500. Setting it to 0 disables 3D effects.
· data-x: The horizontal position of a slide in pixels. Can be negative.
· data-y: The vertical position of a slide in pixels. Can be negative.
· data-scale: Sets the scale of a slide, which is what creates the zoom. Defaults to 1. A value of 4 means the slide is four times larger. In short: lower means zooming in, higher means zooming out. 
· data-rotate-z: The rotation of a slide around the z-axis, in degrees. This will cause the slide to be rotated clockwise or counter-clockwise.
· data-rotate: The same as data-rotate-z.
· data-rotate-x: The rotation of a slide around the x-axis, in degrees. This means you are moving the slide in a third dimension compared with other slides. This is generally a cool effect, if used right.
· data-rotate-y: The rotation of a slide around the y-axis, in degrees.
· data-z: This controls the position of the slide on the z-axis. Setting this value to -3000 means it's positioned -3000 pixels away. This is only useful when you use data-rotate-x or data-rotate-y; otherwise it will only give the impression that the slide is made smaller, which isn't really useful.

Hovercraft! specialities

Hovercraft! has some specific ways it uses reStructuredText. First of all, the reStructuredText "transition" is used to mark the separation between different slides or steps. A transition is simply a line with four or more dashes:

    ----

You don't have to use dashes; you can use any of the characters used to underline headings: = - : . ' " ~ ^ _ * + #. And just as with headings, using different characters indicates different "levels". In this way you can make a hierarchical presentation, with steps and substeps. However, impress.js does not support that, so this is only useful if you make your own templates that use another Javascript library, for example Reveal.js. If you have more than one transition level with the templates included with Hovercraft, the resulting presentation may behave strangely.

All reStructuredText fields are converted into attributes on the current tag. Most of these will typically be ignored by the rendering to HTML, but there are two places where the fields will make a difference, and that is by putting them first in the document, or first on a slide. Any fields you put first in a document will be rendered into attributes on the main impress.js <div>. The only ones that Hovercraft! 
will use are data-transition-duration, skip-help, auto-console and slide-numbers. Any fields you put first in a slide will be rendered into attributes on the slide <div>. This is used primarily to set the position/zoom/rotation of the slide, either with the data-x, data-y and other impress.js settings, or the hovercraft-path setting; more on that later. Hovercraft! will start making the first slide when it first encounters either a transition or a header. Everything that comes before that will belong to the presentation as a whole. A presentation can therefore look something like this:

    :data-transition-duration: 2000
    :skip-help: true

    .. title:: Presentation Title

    ----

    This is the first slide
    =======================

    Here comes some text.

    ----

    :data-x: 300
    :data-y: 2000

    This is the second slide
    ========================

    #. Here we have
    #. A numbered list
    #. It will get correct
    #. Numbers automatically

Relative positioning

Hovercraft! gives you the ability to position slides relative to each other. You do this by starting the coordinates with "r". This will position the slide 500 pixels to the right of and a thousand pixels above the previous slide:

    :data-x: r500
    :data-y: r-1000

Relative paths allow you to insert and remove slides and have the other slides adjust automatically. It's generally the most useful way of positioning.

Automatic positioning

If you don't specify an attribute, the slide settings will be the same as for the previous slide. This means that if you used relative positioning, the next slide will move the same distance. This gives a linear movement, and your slides will end up in a straight line. By default the movement is 1600 pixels to the right, which means that if you don't position any slides at all, you get a standard presentation where the slides simply slide from right to left.

SVG Paths

Hovercraft! supports positioning slides along an SVG path. 
This is handy, as you can create a drawing in software that supports SVG, and then copy-paste that drawing's path. You specify the SVG path with the :hovercraft-path: field. For example:

:hovercraft-path: m275,175 v-150 a150,150 0 0,0 -150,150 z

Every following slide that does not have any explicit positioning will be placed on this path. There are some things you need to be careful about when using SVG paths.

Relative and absolute coordinates

SVG coordinates can either be absolute, with a reference to the page origin; or relative, which is in reference to the last point. Hovercraft! can handle both, but what it can not handle very well is a mixture of them. Specifically, if you take an SVG path that starts with a relative movement and extract that from the SVG document, you will lose the context. All coordinates later must then also be relative. If you have an absolute coordinate you then suddenly regain the context, and everything after the first absolute coordinate will be misplaced compared to the points that come before. Most notably, the open source software "Inkscape" will mix absolute and relative coordinates if you allow it to use relative coordinates. You therefore need to go into its settings and uncheck the checkbox that allows relative coordinates. This forces Inkscape to save all coordinates as absolute, which will work fine.

Start position

By default the start position of the path, and hence the start position of the first slide, will be whatever the start position would have been if the slide had no positioning at all. If you want to change this position, just include :data-x: or :data-y: fields. Both relative and absolute positioning will work here. In all cases, the first m or M command of the SVG path is effectively ignored, but you have to include it anyway.

SVG transforms

SVG allows you to draw up a path and then transform it. Hovercraft!
has no support for these transforms, so before you extract the path you should make sure the SVG software doesn't use transforms. In Inkscape you can do this by the "Simplify" command.

Other SVG shapes

Hovercraft! doesn't support other SVG shapes, just the path. This is because organising slides in squares, etc, is quite simple anyway, and the shapes can be made into paths. Usually you will have to select the shape and tell your software to make it into a path. In Inkscape, transforming an object into a path will generally mean that the whole path is made of CubicBezier curves, which are unnecessarily complex. Using the "Simplify" command in Inkscape is usually enough to make the shapes into paths.

Shape-scaling

Hovercraft! will scale the path so that all the slides that need to fit into the path will fit into it. If you therefore have several paths in your presentation, they will not keep their relative sizes, but will be resized so the slides fit. If you need the shapes to keep their relative sizes, you need to combine them into one path.

Examples

To see how to use Hovercraft! in practice, there are three example presentations included with Hovercraft!

hovercraft.rst
    The demo presentation you can see at http://regebro.github.com/hovercraft
tutorial.rst
    A step by step guide to the features of Hovercraft!
positions.rst
    An explanation of how to use the positioning features.

#### DESIGNING YOUR PRESENTATIONS

There are several tricks to making presentations. I certainly do not claim to be an expert, but here are some beginner's hints.

Take it easy

Don't go too heavy on the zoom. Having a difference in scale between two slides of more than 5 is rarely going to look good. It would make for a nice cool zooming effect if it did, but this is not what browsers are designed for, so it won't. And the 3D effects can be really cool if used well. But not all the time; it gets tiring for the audience.
Try, if you can, to use the zoom and 3D effects when they make sense in the presentation. You can for example mention the main topics on one slide, and then zoom in on each topic when you discuss it in more detail. That way the effects help clarify the presentation, rather than distract from it.

Custom fonts

Browsers tend to render things subtly differently. They also have different default fonts, and different operating systems have different implementations of the same fonts. So to make sure you have as much control over the design as possible, you should always include fonts with the presentation. A good source for free fonts is Google Webfonts. Those fonts are free and open source, so you can use them with no cost and no risk of being sued. They can also be downloaded or included online. If you are making a presentation that is going to run on your computer at a conference or customer meeting, always download the fonts and have them as a part of the presentation. Put them in a folder called fonts under the folder where your presentation is. You also need to define the font-family in your CSS. Font Squirrel's webfont generator will provide you with a platform-independent toolkit for generating both the various font formats and the CSS. If the presentation is online only, you can put an @import statement in your CSS to include Google's webfonts directly. But don't use this for things you need to show on your computer, as it requires you to have internet access.

Test with different browsers

If you are putting the presentation online, test it with a couple of major browsers, to make sure nothing breaks and that everything still looks good. Not only are there subtle differences in how things may get positioned, different browsers are also good at different things. I've tested some browsers, all on Ubuntu, and it is likely that they behave differently on other operating systems, so you have to try for yourself.
Firefox

Firefox 18 is quite slow with impress.js, especially for 3D stuff, so it can have very jerky movements from slide to slide. It does keep text looking good no matter how much you zoom. On the other hand, it refuses to scale text infinitely, so if you scale too much the characters will not grow larger; they will instead start moving around. Firefox 19 is better, but for 3D stuff it's still a bit slow.

Chrome

Chrome 24 is fast, but it will not redraw text in different sizes; instead it creates an image of the text and rescales it, resulting in the previous slide having a fuzzy, pixelated effect.

Epiphany

Epiphany 3.4.1 is comparable to Firefox 19, possibly a bit smoother, and the text looks good. But it has bugs in how it handles 3D data, and the location bar is visible in fullscreen mode, making it less suitable for any sort of presentation.

#### TEMPLATES

Luckily, for most cases you don't need to create your own template, as the default template is very simple and most things you need to do are doable with CSS. However, I don't want Hovercraft! to set up a wall where it isn't flexible enough for your needs, so I added support for making your own templates. You need to create your own template if you are unsatisfied with the HTML that Hovercraft! generates, for example if you need to use another version of HTML or if the reStructuredText you are using isn't being rendered in a way that is useful for you. (Although if you aren't happy with the HTML generated from the reStructuredText, that could very well be a bug, so open an issue on GitHub for discussion.) Hovercraft! generates presentations by converting the reStructuredText into XML and then using XSLT to translate the XML into HTML. Templates are directories with a configuration file, a template XSL file, and any number of CSS, JS and other resource files.
The template configuration file

The configuration file is normally called template.cfg, but if you have several configuration files in one template directory, you can specify which one to use by giving the full path to the configuration file. If you just specify the template directory, template.cfg will be used. Template files are in configparser format, which is an extended ini-style format. They are very simple, and have only one section, [hovercraft]. Any other sections will be ignored. Many of the parameters are lists that often do not fit on one line. In that case you can split the line up over several lines, by indenting the continuation lines. The amount of indentation doesn't make any difference, except aesthetically. The parameters in the [hovercraft] section are:

· template: The name of the XSL template.
· css: A list of CSS filenames separated by whitespace. These files will get included in the final file with "all" as the media specification.
· css-<media>: A list of CSS filenames separated by whitespace. These files will get included in the final file with the media given in the parameter. So the files listed for the parameter "css-print" will get "print" as their media specification, and a key like "css-screen,print" will get the media "screen,print".
· js-header: A list of filenames separated by whitespace. These files will get included in the target file as header script links.
· js-body: A list of filenames separated by whitespace. These files will get included in the target file as script links at the end of the file. The files impress.js, impressConsole.js and hovercraft.js typically need to be included here.
· resources: A list of filenames separated by whitespace that will be copied to the target directory, but nothing else is done with them. Images and fonts used by CSS will be copied anyway, but other resources may be added here.
· resource-directories: A list of directory names separated by whitespace.
These will be treated like the resources above, i.e. only copied to the target directory. The directory contents will be copied recursively, but hidden files (like files starting with a .) are ignored.

An example:

[hovercraft]
template = template.xsl
css = css/screen.css
      css/impressConsole.css
css-print = css/print.css
js-body = js/impress.js
          js/impressConsole.js
          js/hovercraft.js
resources = images/back.png
            images/forward.png
            images/up.png
            images/down.png

The template file

The file specified with the template parameter is the actual XSLT template that will perform the translation from XML to HTML. Most of the time you can just copy the default template file in hovercraft/templates/default/template.xsl and modify it. XSLT is very complex, but modifying the template's HTML is quite straightforward as long as you don't have to touch any of the <xsl:...> tags. Also, the HTML that is generated is XHTML compatible and quite straightforward, so in most cases all you would need to do to generate another version of HTML, for example strict XHTML, would be to change the doctype. But if you need to add or change the main generated HTML, you can add and change HTML statements in this main file as you like. See for example how the little help-popup is added to the bottom of the HTML. If you want to change the way the reStructuredText is rendered, things get slightly more complex. The XSLT rules that convert the reStructuredText XML into HTML are contained in a separate file, reST.xsl. For the most part you can just include it in the template file with the following code:

<xsl:import href="resource:templates/reST.xsl" />

The resource: part here is not a part of XSLT, but a part of Hovercraft! It tells the XSLT translation that the file specified should not be looked up on the file system, but as a Python package resource. Currently the templates/reST.xsl file is the only XSLT resource import available.
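Since template.cfg files are plain configparser format, the following sketch shows how the whitespace- and indentation-separated list values parse. This is illustrative only, not Hovercraft!'s actual loading code:

```python
# Reading a template.cfg-style file with Python's stdlib configparser.
# Indented continuation lines become part of the same value, which can
# then be split on whitespace into a list of filenames.
import configparser

CFG = """
[hovercraft]
template = template.xsl
css = css/screen.css
      css/impressConsole.css
js-body = js/impress.js
          js/impressConsole.js
          js/hovercraft.js
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)

# Multi-line values split into lists of filenames:
css_files = parser["hovercraft"]["css"].split()
js_files = parser["hovercraft"]["js-body"].split()
print(css_files)  # ['css/screen.css', 'css/impressConsole.css']
print(js_files)   # ['js/impress.js', 'js/impressConsole.js', 'js/hovercraft.js']
```

This also shows why the amount of indentation doesn't matter: configparser only needs the continuation lines to be indented at all.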
If you need to change the way reStructuredText is rendered, you need to make a copy of that file and modify it. You then need to make a copy of the main template and change the reference in it to point to your modified XSLT file. None of the XSLT files need to be copied to the target, and they should not be listed as resources in the template configuration file.

#### AUTHOR

Lennart Regebro

2013-2019, Lennart Regebro
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-16-test-page-567/4
## Elementary Technical Mathematics

$111111$

Align the addends at the decimal point and sum the digits with the same place value. In binary notation, the allowable digits are 0 and 1. In standard Base 10 notation, when the sum of one of the columns is 10 or greater, you write the ones digit as the sum for that column and carry the tens digit to the next column. In binary (or Base 2) addition, the same process is used when the sum of one of the columns is 2 or greater. In binary notation, 1+1+1=11, so you would write 1 as the sum for the column and carry the 1 to the next column.

$\begin{array}{r} \overset{1}{1}\,\overset{1}{1}\,\overset{1}{0}\,1\,1 \\ 1\,0\,1\,1\,0 \\ +\ \ 1\,1\,1\,0 \\ \hline 1\,1\,1\,1\,1\,1 \end{array}$
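The sum can be double-checked with Python's built-in base-2 conversion. A quick sanity check, not part of the textbook solution:

```python
# Verify the binary addition 11011 + 10110 + 1110 = 111111 by converting
# each addend to an integer, summing, and converting back to binary.
addends = ["11011", "10110", "1110"]
total = sum(int(a, 2) for a in addends)  # 27 + 22 + 14 = 63
result = format(total, "b")
print(result)  # 111111
```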
https://proofwiki.org/wiki/Definition:Orthogonal_Projection
# Definition:Orthogonal Projection ## Definition Let $H$ be a Hilbert space. Let $K$ be a closed linear subspace of $H$. Then the orthogonal projection on $K$ is the map $P_K: H \to H$ defined by $k = P_K(h) \iff k \in K$ and $d \left({h, k}\right) = d \left({h, K}\right)$ where the latter $d$ signifies distance to a set. That $P_K$ is well-defined follows from Unique Point of Minimal Distance. The name orthogonal projection stems from the fact that $\left({h - P_K \left({h}\right)}\right) \perp K$. This and other properties of $P_K$ are collected in Properties of Orthogonal Projection.
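As a concrete finite-dimensional illustration (not part of the ProofWiki definition): take $H = \R^3$ with the usual inner product and $K$ the line spanned by a vector $k_0$. The point of $K$ at minimal distance from $h$ is given by the familiar projection formula, and the orthogonality property can be checked directly:

```python
# Orthogonal projection onto a one-dimensional subspace K = span{k0}
# of R^3, and a numeric check that h - P_K(h) is orthogonal to K.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def project_onto_line(h, k0):
    """Orthogonal projection of h onto K = span{k0}."""
    c = dot(h, k0) / dot(k0, k0)
    return [c * x for x in k0]

h = [3.0, 4.0, 5.0]
k0 = [1.0, 0.0, 0.0]
p = project_onto_line(h, k0)
residual = [a - b for a, b in zip(h, p)]
print(p)                  # [3.0, 0.0, 0.0]
print(dot(residual, k0))  # 0.0, i.e. (h - P_K(h)) ⊥ K
```

For this $K$, the projection is the unique point of $K$ at minimal distance from $h$, matching the definition above.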
https://en.m.wikipedia.org/wiki/Cross_section_(physics)
# Cross section (physics)

When two particles interact, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as a function of some final-state variable, such as particle angle or energy, it is called a differential cross section. When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section. Cross sections are typically denoted σ (sigma) and measured in units of area. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur. The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics.

## Collision among gas particles

Figure 1.
In a gas of particles of individual diameter 2r, the cross section σ for collisions is related to the particle number density n and the mean free path between collisions λ. In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by

${\displaystyle \sigma ={\frac {1}{n\lambda }},}$

where

σ is the cross section of a two-particle collision (SI units: m2),
λ is the mean free path between collisions (SI units: m),
n is the number density of the target particles (SI units: m−3).

If the particles in the gas can be treated as hard spheres of radius r that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is

${\displaystyle \sigma =\pi \left(2r\right)^{2}.}$

If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles. Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with a well-defined mean free path between collisions.

## Attenuation of a beam of particles

If a beam of particles enters a thin layer of material of thickness dz, the flux Φ of the beam will decrease by dΦ according to

${\displaystyle {\frac {\mathrm {d} \Phi }{\mathrm {d} z}}=-n\sigma \Phi ,}$

where σ is the total cross section of all events, including scattering, absorption, or transformation to another species. The number density of scattering centers is designated by n.
Solving this equation exhibits the exponential attenuation of the beam intensity:

${\displaystyle \Phi =\Phi _{0}e^{-n\sigma z},}$

where Φ0 is the initial flux and z is the total thickness of the material. For light, this is called the Beer–Lambert law.

## Differential cross section

Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the z axis of this coordinate system aligned with the incident beam. The angle θ is the scattering angle, measured between the incident beam and the scattered beam, and φ is the azimuthal angle. The impact parameter b is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle θ. For a given interaction (Coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. dσ = b dφ db. The differential angular range of the scattered particle at angle θ is the solid angle element dΩ = sin θ dθ dφ. The differential cross section is the quotient of these quantities, dσ/dΩ. It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection.
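The relations σ = 1/(nλ) and Φ = Φ0 e^(−nσz) above can be checked numerically. A minimal sketch; all numeric values below are arbitrary illustrative choices, not data from the text:

```python
import math

# Hard spheres of radius r at number density n (illustrative values,
# roughly molecular scale and air-like density).
r = 1.5e-10                      # sphere radius in m (assumed)
n = 2.5e25                       # number density in m^-3 (assumed)
sigma = math.pi * (2 * r) ** 2   # collision cross section, sigma = pi*(2r)^2
lam = 1.0 / (n * sigma)          # mean free path, lambda = 1/(n*sigma)
print(f"mean free path: {lam:.3e} m")

# Attenuation: integrate dPhi/dz = -n*sigma*Phi with simple Euler steps
# and compare with the closed form Phi = Phi0 * exp(-n*sigma*z).
phi0 = 1.0e6                     # initial flux (arbitrary units)
z_total = 2.0 * lam              # traverse two mean free paths
steps = 100000
dz = z_total / steps
phi = phi0
for _ in range(steps):
    phi -= n * sigma * phi * dz

exact = phi0 * math.exp(-n * sigma * z_total)
print(phi, exact)                # both close to phi0 * e^-2
```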
In cylindrically symmetric situations (about the beam axis), the azimuthal angle φ is not changed by the scattering process, and the differential cross section can be written as

${\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} (\cos \theta )}}={\frac {1}{2\pi }}\int _{0}^{2\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\,\mathrm {d} \varphi .}$

In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle. For scattering of particles of incident flux Finc off a stationary target consisting of many particles, the differential cross section dσ/dΩ at an angle (θ,φ) is related to the flux of scattered particle detection Fout(θ,φ) in particles per unit time by

${\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}(\theta ,\varphi )={\frac {1}{nt\Delta \Omega }}{\frac {F_{\text{out}}(\theta ,\varphi )}{F_{\text{inc}}}}.}$

Here ΔΩ is the finite angular size of the detector (SI unit: sr), n is the number density of the target particles (SI units: m−3), and t is the thickness of the stationary target (SI units: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle. The total cross section σ may be recovered by integrating the differential cross section dσ/dΩ over the full solid angle (4π steradians):

${\displaystyle \sigma =\oint _{4\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\,\mathrm {d} \Omega =\int _{0}^{2\pi }\int _{0}^{\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\sin \theta \,\mathrm {d} \theta \,\mathrm {d} \varphi .}$

It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, σ may be referred to as the integral cross section or total cross section.
The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events. The differential cross section is an extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus. Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections. Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime.

## Quantum scattering

In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum k:

${\displaystyle \phi _{-}(\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;e^{ikz},}$

where z and r are the relative coordinates between the projectile and the target. The arrow indicates that this only describes the asymptotic behavior of the wave function when the projectile and target are too far apart for the interaction to have any effect. After the scattering takes place, it is expected that the wave function takes on the following asymptotic form:

${\displaystyle \phi _{+}(\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;f(\theta ,\phi ){\frac {e^{ikr}}{r}},}$

where f is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions.
The full wave function of the system behaves asymptotically as the sum

${\displaystyle \phi (\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;\phi _{-}(\mathbf {r} )+\phi _{+}(\mathbf {r} ).}$

The differential cross section is related to the scattering amplitude:

${\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}(\theta ,\phi )={\bigl |}f(\theta ,\phi ){\bigr |}^{2}.}$

This has the simple interpretation as the probability density for finding the scattered projectile at a given angle. A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles Ir) depends only on the number of incident particles per unit of time (current of incident particles Ii), the characteristics of the target (for example the number of particles per unit of surface N), and the type of interaction.
For Nσ ≪ 1 we have

{\displaystyle {\begin{aligned}I_{\text{r}}&=I_{\text{i}}N\sigma ,\\\sigma &={\frac {I_{\text{r}}}{I_{\text{i}}}}{\frac {1}{N}}\\&={\text{probability of interaction}}\times {\frac {1}{N}}.\end{aligned}}}

### Relation to the S-matrix

If the reduced masses and momenta of the colliding system are mi, pi and mf, pf before and after the collision respectively, the differential cross section is given by

${\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}=\left(2\pi \right)^{4}m_{i}m_{f}{\frac {p_{f}}{p_{i}}}{\bigl |}T_{fi}{\bigr |}^{2},}$

where the on-shell T matrix is defined by

${\displaystyle S_{fi}=\delta _{fi}-2\pi i\delta \left(E_{f}-E_{i}\right)\delta \left(\mathbf {p} _{i}-\mathbf {p} _{f}\right)T_{fi}}$

in terms of the S-matrix. Here δ is the Dirac delta function. The computation of the S-matrix is the main goal of scattering theory.

## Units

Although the SI unit of total cross sections is m2, smaller units are usually used in practice. In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2.[1] Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr. When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 µm in diameter; as such, it is widely used in meteorology and in the measurement of atmospheric pollution. The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = 10000 pm2 = 108 b.
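The unit conversions above are easy to mis-copy, so here is a quick check. Plain SI bookkeeping, nothing specific to any library:

```python
# Cross-section unit conversions: the barn and the square angstrom in SI.
barn = 1e-28       # 1 b in m^2
fm2 = 1e-30        # (1 fm)^2 in m^2
angstrom2 = 1e-20  # (1 Å)^2 in m^2

print(round(barn / fm2))        # 100 -> 1 b = 100 fm^2
print(round(angstrom2 / barn))  # 100000000 -> 1 Å^2 = 10^8 b
```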
## Scattering of light

For light, as in other settings, the scattering cross section is generally different from the geometrical cross section of a particle, and it depends upon the wavelength of light and the permittivity, shape and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present. In terms of area, the total cross section (σ) is the sum of the cross sections due to absorption, scattering and luminescence:

${\displaystyle \sigma =\sigma _{\text{a}}+\sigma _{\text{s}}+\sigma _{\text{l}}.}$

The sum of the absorption and scattering cross sections is sometimes referred to as the extinction cross section. The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to particle concentration:

${\displaystyle A_{\lambda }=Cl\sigma ,}$

where Aλ is the absorbance at a given wavelength λ, C is the particle concentration as a number density, and l is the path length. The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance T:[2]

${\displaystyle A_{\lambda }=-\log {\mathcal {T}}.}$

## Scattering of light on extended bodies

In the context of scattering light on extended bodies, the scattering cross section, σscat, describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present.
As in the previous section, the total cross section (σ) is the sum of the cross sections due to absorption, scattering and luminescence, and it is related to the absorbance of the light intensity through the Beer–Lambert law, Aλ = Clσ.[3]

### Relation to physical size

There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of radiation used. This can be seen when driving in foggy weather: the droplets of water (which form the fog) scatter red light less than they scatter the shorter wavelengths present in white light, and the red rear fog light can be distinguished more clearly than the white headlights of an approaching vehicle. That is to say that the scattering cross section of the water droplets is smaller for red light than for light of shorter wavelengths, even though the physical size of the particles is the same.

### Meteorological range

The scattering cross section is related to the meteorological range LV:

${\displaystyle L_{\text{V}}={\frac {3.9}{C\sigma _{\text{scat}}}}.}$

The quantity Cσscat is sometimes denoted bscat, the scattering coefficient per unit length.[4]

## Examples

### Example 1: elastic collision of two hard spheres

The elastic collision of two hard spheres is an instructive example that demonstrates the sense of calling this quantity a cross section. R and r are respectively the radii of the scattering center and the scattered sphere.
The total cross section is

$$\sigma_{\text{tot}} = \pi \left(r + R\right)^2.$$

So in this case the total scattering cross section is equal to the area of the circle (with radius r + R) within which the center of mass of the incoming sphere has to arrive for it to be deflected; outside this circle it passes by the stationary scattering center.

### Example 2: scattering of light from a 2D circular mirror

Another example illustrates the details of the calculation of a simple light-scattering model, obtained by reducing the dimension. For simplicity, we consider the scattering of a beam of light, treated as a uniform density of parallel rays within the framework of geometrical optics, from a circle of radius r with a perfectly reflecting boundary. Its three-dimensional equivalent is the harder problem of laser or flashlight light scattering from a mirror sphere, for example from a mechanical bearing ball.[5] The unit of cross section in one dimension is the unit of length, for example 1 m. Let α be the angle between the light ray and the radius joining the reflection point of the ray to the center of the circular mirror.
Then the increase of the length element perpendicular to the light beam is expressed through this angle as

$$\mathrm{d}x = r \cos\alpha \, \mathrm{d}\alpha.$$

The reflection angle of this ray with respect to the incoming ray is then 2α, and the scattering angle is

$$\theta = \pi - 2\alpha.$$

The energy, or the number of photons, reflected from a light beam of intensity (photon density) I over the length dx is

$$I \, \mathrm{d}\sigma = I \, \mathrm{d}x = I r \cos\alpha \, \mathrm{d}\alpha = I \frac{r}{2} \sin\left(\frac{\theta}{2}\right) \mathrm{d}\theta = I \frac{\mathrm{d}\sigma}{\mathrm{d}\theta} \, \mathrm{d}\theta.$$

The differential cross section is therefore (with dΩ = dθ)

$$\frac{\mathrm{d}\sigma}{\mathrm{d}\theta} = \frac{r}{2} \sin\left(\frac{\theta}{2}\right).$$

As seen from the behaviour of the sine function, this quantity has its maximum for backward scattering (θ = π; the light is reflected perpendicularly and returns) and its minimum of zero for scattering from the edge of the circle directly forward (θ = 0). This confirms the intuitive expectation that the mirror circle acts like a diverging lens: a thin beam is diluted more the closer it passes to the edge, defined with respect to the incoming direction. The total cross section is obtained by summing (integrating) the differential cross section over the entire range of angles:

$$\sigma = \int_0^{2\pi} \frac{\mathrm{d}\sigma}{\mathrm{d}\theta} \, \mathrm{d}\theta = \int_0^{2\pi} \frac{r}{2} \sin\left(\frac{\theta}{2}\right) \mathrm{d}\theta = \left. -r \cos\left(\frac{\theta}{2}\right) \right|_0^{2\pi} = 2r,$$

which equals the diameter of the circle: the circular mirror totally screens the two-dimensional space from the beam of light. In three dimensions, for a mirror ball of radius r, it is correspondingly σ = πr².
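The 2D mirror result above is easy to cross-check numerically. This is my own sketch (a simple midpoint-rule integration of the differential cross section, not part of the article):

```python
import math

def mirror_2d_total_cross_section(r: float, n: int = 100_000) -> float:
    """Numerically integrate dσ/dθ = (r/2) sin(θ/2) over θ in [0, 2π]
    with the midpoint rule; the closed form derived above gives 2r."""
    dtheta = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint of each sub-interval
        total += (r / 2) * math.sin(theta / 2) * dtheta
    return total

print(mirror_2d_total_cross_section(1.0))  # close to 2.0, i.e. 2r
```

For any radius, the numerical value converges to the diameter 2r, as the analytic integral predicts.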
### Example 3: scattering of light from a 3D spherical mirror

We can now use the result of Example 2 to calculate the differential cross section for light scattering from a perfectly reflecting sphere in three dimensions. Denote the radius of the sphere by a, and parametrize the plane perpendicular to the incoming light beam by the cylindrical coordinates r and φ. In any plane containing the incoming and reflected rays we can write, from the previous example,

$$r = a \sin\alpha, \qquad \mathrm{d}r = a \cos\alpha \, \mathrm{d}\alpha,$$

while the impact area element is

$$\mathrm{d}\sigma = \mathrm{d}r \cdot r \, \mathrm{d}\varphi = \frac{a^2}{2} \sin\left(\frac{\theta}{2}\right) \cos\left(\frac{\theta}{2}\right) \mathrm{d}\theta \, \mathrm{d}\varphi.$$

Using the relation for the solid angle in spherical coordinates,

$$\mathrm{d}\Omega = \sin\theta \, \mathrm{d}\theta \, \mathrm{d}\varphi,$$

and the trigonometric identity

$$\sin\theta = 2 \sin\left(\frac{\theta}{2}\right) \cos\left(\frac{\theta}{2}\right),$$

we obtain

$$\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} = \frac{a^2}{4},$$

while the total cross section, as expected, is

$$\sigma = \oint_{4\pi} \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} \, \mathrm{d}\Omega = \pi a^2.$$

As one can see, this also agrees with the result of Example 1 if the photon is treated as a rigid sphere of zero radius.

## References

Notes

1. ^ International Bureau of Weights and Measures (2006), The International System of Units (SI) (PDF) (8th ed.), pp. 127–28, ISBN 92-822-2213-6, archived (PDF) from the original on 2017-08-14
2. ^ Bajpai, P. K. "2. Spectrophotometry". Biological Instrumentation and Biology. ISBN 81-219-2633-5.
3. ^ Bajpai, P. K. "2. Spectrophotometry". Biological Instrumentation and Biology. ISBN 81-219-2633-5.
4.
^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version:  (2006–) "Scattering cross section, σscat". 5. ^ M. Xu, R. R. Alfano (2003). "More on patterns in Mie scattering". Optics Communications. 226: 1–5. Bibcode:2003OptCo.226....1X. doi:10.1016/j.optcom.2003.08.019. Sources • J. D. Bjorken, S. D. Drell, Relativistic Quantum Mechanics, 1964 • P. Roman, Introduction to Quantum Theory, 1969 • W. Greiner, J. Reinhardt, Quantum Electrodynamics, 1994 • R. G. Newton. Scattering Theory of Waves and Particles. McGraw Hill, 1966. • R. C. Fernow (1989). Introduction to Experimental Particle Physics. Cambridge University Press. ISBN 0-521-379-407.
https://homework.cpm.org/category/MN/textbook/cc2mn/chapter/4/lesson/4.3.3/problem/4-121
### Problem 4-121

Kris said, "The Rawlings Rockets basketball team does not have any really tall players." These are the players' heights in inches: $70$, $77$, $75$, $68$, $88$, $70$, and $72$.

1. Which number does not seem to fit this set of data?

   Is there one height that is significantly higher or lower than the rest of the data? The player that is $88$ inches tall is significantly taller than the rest of the group.

2. Do you agree or disagree with Kris? Explain.

   Try converting these heights into feet and comparing them to your own height. The player that is $88$ inches tall is $7$ feet $4$ inches tall.
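The arithmetic behind the hints can be checked in a few lines of Python (my own sketch, not part of the CPM materials):

```python
from statistics import mean

heights_in = [70, 77, 75, 68, 88, 70, 72]

avg = mean(heights_in)      # about 74.3 inches
tallest = max(heights_in)   # 88, the suspected outlier

# Convert 88 inches to feet and inches: 88 = 7 * 12 + 4
feet, inches = divmod(tallest, 12)
print(f"mean = {avg:.1f} in, tallest = {feet} ft {inches} in")
```

The tallest player is well above the mean of roughly 74 inches, which supports the answer to part 1.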
https://paperswithcode.com/paper/enhancement-of-the-axion-decay-constant-in
# Enhancement of the Axion Decay Constant in Inflation and the Weak Gravity Conjecture

6 Jun 2019 · Pran Nath, Maksim Piskunov

Models of axion inflation based on a single cosine potential require the axion decay constant $f$ to be super-Planckian in size. However, $f > M_{Pl}$ is disfavored by the Weak Gravity Conjecture (WGC)...

PDF Abstract
http://www.mathemafrica.org/?cat=121&paged=3
Fundamental theorem of calculus example

We did an example today in class which I wanted to go through again here. The question was to calculate

$\frac{d}{dx}\int_a^{x^4}\sec t \, dt$

We spot immediately that it's an FTC part 1 type question, but it's not quite in the standard form. In FTC part 1, the upper limit of the integral is just $x$, not $x^4$. A question that we would be able to answer is:

$\frac{d}{dx}\int_a^{x}\sec t \, dt$

This would just be $\sec x$. Or, of course, we can show the same thing with a different variable name:

$\frac{d}{du}\int_a^{u}\sec t \, dt=\sec u$

That's just changing the names of the variables, which is fine, right? But that's not quite the question. So, how can we convert from $x^4$ to $u$? How about a substitution? Let $x^4=u$ and see what happens. This is actually just a chain rule. It's like being asked to calculate $\frac{d}{dx} g(x^4)$. You would just say: let $x^4=u$, and then we have: $\frac{d}{dx} g(x^4)=\frac{du}{dx}\frac{d}{du}g(u)=4x^3 g'(u)$.…

Advice for MAM1000W students from former MAM1000W students – part 5

I resisted MAM1000W every single day; I even complained to myself about how it isn't useful. Little did I know that when it all finally clicked towards the end, even though I wasn't going to be using maths in my life directly, the methodology of thinking and applying would help me to this day.

Surviving MAM1000W isn't really a miraculous thing. While everyone tends to make it seem impossible, it is challenging (not hard), and I say that because I have seen first-hand that practice makes it better each time. Getting to know the principles by actually doing the tuts, which are in my opinion the most important element of the course, will make sure that even though you feel like you aren't learning anything, when the time comes (usually second semester) it will all click, and you will see how you are actually linking the information together. Another important aspect is playing the numbers game.
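Returning to the fundamental theorem of calculus example at the top of this page: the claimed derivative $4x^3\sec(x^4)$ can be sanity-checked numerically. This is my own sketch, using the standard antiderivative $\ln|\sec t + \tan t|$ and a central difference; it is not code from the post:

```python
import math

def F(x: float) -> float:
    """F(x) = integral of sec(t) dt from 0 to x**4,
    via the antiderivative ln|sec t + tan t| (which vanishes at t = 0)."""
    u = x ** 4
    return math.log(1 / math.cos(u) + math.tan(u))

def F_prime_expected(x: float) -> float:
    """FTC part 1 plus the chain rule: F'(x) = 4x^3 * sec(x^4)."""
    return 4 * x ** 3 / math.cos(x ** 4)

x, h = 0.9, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
print(numeric, F_prime_expected(x))  # the two values agree closely
```

The finite-difference slope matches the chain-rule answer to many decimal places, as it should.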
Missing lectures after writing a challenging test – thoughts from a recent MAM1000W student

The following is by one of the current undergraduate tutors for MAM1000W, Nthabiseng Machethe, who has been providing me with extremely useful feedback and her thoughts on the course from the perspective of a recent student of it. She wrote this to me after a lot of students were disappointed with their marks from the last test.

——

This is based on my perspective as a student. I always plan to attend lectures; however, as the workload increases and exhaustion kicks in, it is difficult to keep up with the plan. It is easy to think of the things one may want (like excelling in MAM1000W), but realistically it is hard to achieve them. In most instances, you find that students are studying a certain concept with a short-term vision (passing a test), which can give one instant gratification but may not be sustained in the long run (the exam). Hence, one tends to complain that the time spent studying for the test does not equate to the marks.

2017 2/3rds numbers game

This is the fourth year that I've played the 2/3rds numbers game with my first year maths class. I'm always interested to see how knowing previous results will affect this year's results. Of course a great deal depends on exactly how I explain the game, so I imagine that this is the largest confounding factor in this 'study'. If you don't know about the 2/3rds numbers game, take a look at the post here. Here are the histograms from the last three years: This year I told the class the mean results from the previous years to see if it would make a difference (as it seemed to last year). This year, the results are somewhat lower: The winner was thus the person who got closest to 2/3 of 24.4 = 16.3. This year one person guessed 16, and one person guessed 16.2.
Because everyone was asked to write down an integer, unfortunately I can't claim that 16.2 is the winner, but they will get a second prize.…

Some more volume visualisations

Here is an animation which may help you imagine a shape which has a circular base, with parallel slices perpendicular to the base being equilateral triangles: The same thing, where the slices are squares. And here is the region in the (x,y) plane between $y=\sqrt{x}$, the x-axis and the line x=1, rotated about the y-axis. A thin shell is drawn in the volume, then pulled out. Then it is replaced, the volume is filled with shells, and each of them is pulled out of the volume vertically. This is to give you an idea of how to visualise the method of cylindrical shells.

Guidelines for visualising and calculating volumes of revolution

I have seen some people try to blindly use the formulae for volumes of revolution by cylindrical cross-sections and by cylindrical shells, and I thought that I would write a guide on how I would recommend tackling such problems, as just plugging into the formulae will generally lead you down blind alleys. I've created an example, with an animation, which I hope will help you master this technique. So, here is a relatively fool-proof strategy:

1. Draw the region which you are going to have to rotate around some axis. This will generally be a matter of:
   - Drawing the curves that you have been given
   - Finding where they intersect
2. Draw the line about which you are supposed to rotate the region.
3. Draw the reflection of the region about the line of rotation: this gives you a slice through the volume that will be formed.
4. Now you have to decide which method to use:
   - Take a slice through the volume perpendicular to the axis of rotation.

Using integration to calculate the volume of a solid with a known cross-sectional area

Hi there again, I have not written a post in a while; here goes my second post.
I would like us to discuss one of the important applications of integration. We have seen how integration can be used to solve the area problem; in this post we are going to see how we can use a similar idea to solve the volume problem. I suggest that we start by looking at solids whose volume we know very well. You should be able to calculate the volumes of the cylinders below (yes, they are all cylinders). Cylinders are nice: we only need to multiply the cross-sectional area by the height/length to find the volume. This is because they have two identical flat ends and the same cross-section from one end to the other. Unfortunately, not all the solid figures that we come across every day are cylinders. The figures below are not cylinders.…

Introduction to trigonometric substitution

I have decided to start writing some posts here, and this is my first post. I would like to introduce trig substitution by presenting an example that you have seen before. Trig substitution is one of the techniques of integration; it's like u-substitution, except that you substitute a trig function. Let's get into the example already!

$\int_{-1}^{1} \sqrt{1-x^2} dx$

If you equate the integrand to y (and get $x^2+y^2=1$, $y\geq 0$), you should be able to see that this is the area of the upper half of a unit circle. The answer to this definite integral is therefore the area of the upper half of the unit circle, which is $\frac{\pi}{2}$ (yes, the definite integral of f(x) from a to b gives you the net area between f(x) and the x-axis from x=a to x=b). We relied on the geometrical interpretation of the integral to solve the definite integral, but can we also show this algebraically?…

Riemann sums to definite integral conversion

In the most recent tutorial there is a question about converting a Riemann sum to a definite integral, and it seems to be tripping up quite a few students. I wanted to run through one of the calculations in detail so you can see how to answer such a question.
Let’s look at the example: $\lim_{n\rightarrow\infty}\sum_{i=1}^n\left(9\left(4+(i-1)\frac{6}{n}\right)^2-8\left(4+(i-1)\frac{6}{n}\right)+7\right)\frac{13}{n}$ There are many ways to tackle such a question but let’s take one particular path. Let’s start by the fact that when the limit is defined, the limit of a sum is the sum of the limits. We can split up our expression into 3, which looks like: $\lim_{n\rightarrow\infty}\sum_{i=1}^n9\left(4+(i-1)\frac{6}{n}\right)^2\frac{13}{n}-\lim_{n\rightarrow\infty}\sum_{i=1}^n\left(8\left(4+(i-1)\frac{6}{n}\right)\right)\frac{13}{n}+\lim_{n\rightarrow\infty}\sum_{i=1}^n7\frac{13}{n}$ Let’s tackle each of these separately. Let’s look at the first term: $\lim_{n\rightarrow\infty}\sum_{i=1}^n9\left(4+(i-1)\frac{6}{n}\right)^2\frac{13}{n}$ Well, we can take the factor of 13 outside the front of the whole thing to start with, along with the factor of 9, and this will give $13\times 9\lim_{n\rightarrow\infty}\sum_{i=1}^n\left(4+(i-1)\frac{6}{n}\right)^2\frac{1}{n}$ We see here that we have a sum of terms, and a factor which looks like $\frac{1}{n}$ in each term.… Some sum identities During tutorials last week, a number of students asked how to understand identities that are used in the calculation of various Riemann sums and their limits. These identities are: $\sum_{i=1}^n 1=n$ $\sum_{i=1}^n i=\frac{n(n+1)}{2}$ $\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}$ $\sum_{i=1}^n i^3=\left(\frac{n(n+1)}{2}\right)^2$ Let’s go through these one by one. We must first remember what the sigma notation means. If we have: $\sum_{i=1}^n f(i)$ It means the sum of terms of the forms f(i) for i starting with 1 and going up to i=n. Sometimes n will actually be an integer, and sometimes it will be left arbitrary. So, the above sum can be written as: $\sum_{i=1}^n f(i)=f(1)+f(2)+f(3)+f(4)+....+f(n-2)+f(n-1)+f(n)$ We haven’t specified what f is, but that’s because this statement is general and applies for any time of function of i. 
In the first of the identities above, the function is simply f(i)=1, which isn’t a very interesting function, but it still is one. It says, whatever i we put in, output 1. So this sum can be written as: $\sum_{i=1}^n 1=1+1+1+1+....+1$ Where there are n terms.…
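All four identities above are easy to spot-check numerically. Here is a short brute-force sketch of my own (not from the post), relying on integer arithmetic so the checks are exact:

```python
def check_sum_identities(n: int) -> None:
    """Verify the four closed-form sum identities for a given n."""
    i_vals = range(1, n + 1)
    assert sum(1 for _ in i_vals) == n
    assert sum(i_vals) == n * (n + 1) // 2
    assert sum(i * i for i in i_vals) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(i ** 3 for i in i_vals) == (n * (n + 1) // 2) ** 2

for n in (1, 2, 10, 100):
    check_sum_identities(n)
print("all four identities hold for the sampled n")
```

Of course, a finite check is not a proof; the identities themselves are usually proved by induction.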
https://chemistry.stackexchange.com/questions/82136/how-to-determine-the-correct-inchi-for-a-certain-compound/82150
# How to determine the correct InChI for a certain compound?

Let's say I have the compound L-xylulose 1-phosphate and I want to know its correct InChI; how do I do this? The reason I ask is that when I go to different databases I get different results:

- ChEBI: $\ce{C5H11O8P}$ InChI=1S/C5H11O8P/c6-1-3(7)5(9)4(8)2-13-14(10,11)12/h3,5-7,9H,1-2H2,(H2,10,11,12)/t3-,5+/m0/s1
- NIKKAJI: $\ce{C5H11O8P}$ InChI=1S/C5H11O8P/c6-3-1-12-5(8,4(3)7)2-13-14(9,10)11/h3-4,6-8H,1-2H2,(H2,9,10,11)/t3-,4+,5?/m0/s1
- ModelSeed: $\ce{C5H10O8P}$ InChI=1S/C5H11O8P/c6-3-1-12-5(8,4(3)7)2-13-14(9,10)11/h3-4,6-8H,1-2H2,(H2,9,10,11)/t3-,4+,5?/m0/s1

So ChEBI and NIKKAJI show the same chemical formula but different InChI strings, while NIKKAJI and ModelSeed differ in their chemical formula but have the same InChI (which seems wrong, as ModelSeed's InChI implies that it also has 11 Hs). But this would still not explain the differences between the first two...

That all looks quite fishy to me; how can I decide which of the databases shows me the correct information? Is it possible to get the same InChI for compounds that differ in their chemical formula (I thought this wouldn't be possible)?

One of the goals of the InChI project was to ensure uniqueness: [1]

Strict uniqueness of identifier

The same label always means the same substance, and the same substance always receives the same label (under the same labelling conditions). This is achieved through a well-defined procedure of obtaining canonical numbering of atoms.
Whilst this is often the case for InChI strings, there are some (complex) examples where the above goal has not been met, for example in the natural product spongistatin (below), where two isomers incidentally have the same InChI key: [2]

InChI=1S/C63H95ClO21/c1-33(19-42(67)18-17-35(3)64)20-53-55(72)57-39(7)58(79-53)59(73)63(75)31-51(70)37(5)52(85-63)16-14-12-13-15-44-22-43(68)27-61(81-44)29-47(76-11)23-45(82-61)25-50(69)38(6)56(78-41(9)66)36(4)34(2)21-49-28-60(10,74)32-62(84-49)30-48(77-40(8)65)24-46(83-62)26-54(71)80-57/h13,15,17-18,36-39,42-49,51-53,55-59,67-68,70,72-75H,1-3,12,14,16,19-32H2,4-11H3/b15-13-,18-17+/t36-,37+,38+,39+,42+,43+,44+,45-,46+,47+,48-,49+,51-,52-,53-,55+,56+,57-,58+,59+,60+,61+,62+,63-/m0/s1

Although InChI can sometimes break (as per above), most of the issues with InChI strings are with implementation (how the structure is passed to the thing generating the string in the first place). Several (common) features are at present unsupported by the InChI implementation:

- Polymers
- Complex organometallics
- Markush structures
- Mixtures
- Conformers
- Excited state and spin isomers
- Local stereochemistry/chirality
- Topological isomers
- Cluster molecules
- Polymorphs
- Unspecific isotopic enrichment
- Reactions

Generation of InChI strings

InChI is, by nature, an algorithm designed to be run by a computer. Whilst strings can be parsed by humans (with enough effort), the complexity of the strings is such that it is challenging to ensure they're correct. The InChI FAQ specifically deals with this:

You should not do so (though you of course can). This may give apparently reasonable answers but it is error-prone and may break relations in the InChI.
The most recent implementation of InChI is provided by the InChI trust, open source and free of charge.[3] As alluded to above, a common source of InChI errors is the way in which the structure of interest is passed to the algorithm rather than with a fundamental flaw with the process used to generate it. Sh*t in, sh*t out, so to speak. To give a concrete example, consider the heterocyclic system below: Clearly, the two tautomeric forms cannot be distinguished chemically, but depending on how the InChI string is generated, they may end up having the same or different strings. In this case, we need to specify to the InChI algorithm whether we want to fix the hydrogens (to show a single tautomer, each of which would have a unique InChI string), or not fix them (such that both tautomers have the same InChI string) Validation The real question you're possibly interested in isn't How to determine the correct InChI for a certain compound?, but rather How to validate an InChI string for a certain compound?. Given the complexity of the InChI strings, this is a challenging thing, and to my knowledge there is no tool which allows a string to be provided and says whether it is valid or not (similarly you can't provide a proposed IUPAC name to any tool which will say whether it is the IUPAC preferred name). One thing you can do is use the InChI string to generate a structure (ChemDraw does this). Using your strings from the question, it's evident that they actually refer to different forms of the molecule (cyclic vs acyclic, just imagine the terminal primary alcohol attacking the ketone). Chemically, it may be that the cyclic and acyclic forms are in equilibrium- in this case there is no way for InChI to represent the mixture (and hence which structure used to generate the string is ambiguous). It could also be that they are separable chemically and not interconverting, in which case quite rightly they should have different InChI strings. 
[1]: Journal of Cheminformatics, 2015, 7, 23
[2]: http://www-jmg.ch.cam.ac.uk/data/inchi/ (accessed 3-Sept-2017)
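One small consistency check suggested by the question itself: in a standard InChI the molecular formula sits in the layer right after the version prefix (the layers are separated by `/`), so a database's displayed formula can be compared against its own InChI string. A stdlib-only sketch of my own (for real work, use the official InChI software or a cheminformatics toolkit rather than string surgery):

```python
def inchi_formula(inchi: str) -> str:
    """Return the molecular-formula layer of a standard InChI string.
    Layers are '/'-separated; the formula is the layer right after
    the version prefix (e.g. '1S')."""
    parts = inchi.split("/")
    if len(parts) < 2 or not parts[0].startswith("InChI="):
        raise ValueError("not an InChI string")
    return parts[1]

# The ChEBI and ModelSeed strings quoted in the question
chebi = ("InChI=1S/C5H11O8P/c6-1-3(7)5(9)4(8)2-13-14(10,11)12"
         "/h3,5-7,9H,1-2H2,(H2,10,11,12)/t3-,5+/m0/s1")
modelseed = ("InChI=1S/C5H11O8P/c6-3-1-12-5(8,4(3)7)2-13-14(9,10)11"
             "/h3-4,6-8H,1-2H2,(H2,9,10,11)/t3-,4+,5?/m0/s1")

# Both formula layers read C5H11O8P, so ModelSeed's displayed
# formula C5H10O8P disagrees with its own InChI.
print(inchi_formula(chebi), inchi_formula(modelseed))
```

This kind of check cannot tell you which InChI is *correct*, but it does flag internally inconsistent database entries.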
https://www.gamedev.net/forums/topic/638600-dx11-adapter-with-emtpy-output/
# DX11 adapter with empty output

Hello gamedev.net,

I was struggling with a problem with the EnumAdapters1 function of DXGI. What I wanted to do was switch from my onboard chip to my NVIDIA GeForce GT 550M (laptop), because my onboard chip is only DX10.1 and the application runs fine when reverting from DX11.

printf("Adapter: %d (%s)\n", adapterCount, m_videoCardDescription);

shows my NVIDIA chip, but when trying to use this card later, it can't find any outputs:

if(FAILED(adapter->EnumOutputs(OutputCount, &adapterOutput))) break;

Here is a snippet (parts elided):

[...] int adapterChoosen = 0; if(adapterCount > 1) { } else if(adapterCount == 1) { } else { log(LOG_ERROR, "No suitable adapter found!"); return false; } log(LOG_ERROR, "D3DClass::Initalize IDXGIFactory::EnumAdapters failed"); return false; } int OutputCount; for(OutputCount = 0; ; OutputCount++) { //adapter->EnumOutputs(OutputIndex, &adapterOutput) == S_OK) //printf("Output found: %d\n", OutputIndex); break; } [...]

My chip can handle DX11, as I can start other DX11 applications, which work.

Regards

PS: sorry for my bad English

##### Reply

Are you able to see your output when you query for the onboard graphics? If so, then do you need to plug your monitor directly into the graphics card output in order to use it as an output for that adapter? I haven't done much with multi-adapter/multi-output setups, so that is just a guess...

##### Reply

I noticed, when printing the adapters to the console, it looks like this:

Searching for adapters...
Adapter: 0 (Intel(R) HD Graphics Family)
Adapter: 1 (NVIDIA GeForce GT 540M)

When typing in 0, the application runs on DirectX 10.1 on my internal chip. When typing in 1, the application crashes (D3DClass::Initialize IDXGIAdapter::EnumOutputs failed), so even though I can read out the GeForce adapter, I can't use it.

Now I found a workaround: opening the NVIDIA Control Panel and changing, in the 3D settings for this specific application, the preferred processor from "global (Intel(R) HD Graphics ...)" to the NVIDIA chip. The console output has changed:

Searching for adapters...
Adapter: 0 (NVIDIA GeForce GT 540M)
Adapter: 1 (NVIDIA GeForce GT 540M)

With "adapter 0" it chooses my NVIDIA chip and everything initializes correctly, but even here, typing in "1" makes the application crash with the same error as above. This is strange to me, because the application can see this "adapter 1". Is there any explanation of this behavior?

Regards
2017-07-23 09:28:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2051037698984146, "perplexity": 4897.16605374934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424296.90/warc/CC-MAIN-20170723082652-20170723102652-00189.warc.gz"}
http://www.talkstats.com/threads/how-to-calculate-the-overall-mean-of-several-likert-scale-data.23973/
# How to calculate the overall mean of several Likert scale data?

#### wei6722

##### New Member

Hi, I have a questionnaire as follows:

Perceived Usefulness: Rating (5-point Likert scale)
PU01
PU02
PU03
...

I used SPSS to compute the mean of these data. The output is as follows:

Descriptive Statistics
N | Minimum | Maximum | Mean | Standard Deviation
PU01
PU02
PU03
Valid N (listwise)

The output shows that a mean is computed for each of PU01, PU02, PU03. This is not the result that I desired. The question is: how can I calculate the mean of PU01, PU02, PU03 under one category, which is Perceived Usefulness?

*English is not my native language. Thank you.

#### parsec2011

##### Guest

You need to give a value to your Likert-scale items. That is to say:

Strongly agree = 5
Agree = 4
Neither agree nor disagree = 3
Disagree = 2
Strongly disagree = 1

Example: let's say respondent 1 answered disagree, respondent 2 answered strongly disagree, and respondent 3 answered neither agree nor disagree. In this case, you have three "converted" values: 2, 1, 3. Their mean, or average, equals 2.

I believe it is more meaningful to use a different scale to 1, 2, 3, 4, 5 for your code. If you use multiples (2.5, 5.0, 7.5, 10), you get somewhat of a more meaningful scale, in my opinion, although I am unsure whether there would be any serious statistical implications.

http://www.uni.edu/its/support/article/604

All the best, Ramon

#### SmoothJohn

##### New Member

But what would such a mean mean?

#### helicon

##### Member

If you're using a validated scale, the original authors will have provided instructions on how it should be scored. If you're asking for advice on how to obtain a mean score across the items in SPSS, then you can run the following syntax (adjusting for however many items are in the scale):

Code:
compute PUmean = mean(PU01 to PU03).
exe.
#### SiBorg

##### New Member

If you're using a Likert scale, consider using the median rather than the mean, as you're dealing with ordinal rather than continuous data.

#### bryangoodrich

##### Probably A Mammal

> I believe it is more meaningful to use a different scale to 1,2,3,4,5 for your code. If you use multiples, 2.5, 5.0, 7.5, 10, you get somewhat of a more meaningful scale in my opinion. Although I am unsure whether there would be any serious statistical implications.

Psychology isn't my area, but I'm curious to know what research there is to back that up, or that discusses this issue? It's something I've wondered about. I know more ratings (say, a 7-point scale) can give you a finer partition of the provided space (metric), so numerical values like means more closely match their real-number counterparts, but I would like to find some articles that talk about using different numerical encodings and their benefits, such as your suggested multiples.

#### noetsi

##### Fortran must die

I think in practice it would depend heavily on how people interpreted those codes. For an ordinal-scaled variable I am not sure of the logic of assigning any particular scale more or less value. If you choose to interpret a Likert scale as interval-like (as is often done, but much debated), the mean of the latter scale would be more confusing, or at least less simple.
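The scoring step suggested in the thread (SPSS's `compute PUmean = mean(PU01 to PU03).`) can be mirrored in plain Python. A minimal sketch; the item names and respondent values below are made up for illustration:

```python
# Per-respondent scale score: the mean across the Likert items,
# analogous to SPSS's COMPUTE PUmean = MEAN(PU01 TO PU03).
import statistics

respondents = [
    {"PU01": 4, "PU02": 5, "PU03": 3},
    {"PU01": 2, "PU02": 1, "PU03": 3},
]

for r in respondents:
    r["PUmean"] = statistics.mean([r["PU01"], r["PU02"], r["PU03"]])

print([r["PUmean"] for r in respondents])
```

Each respondent then carries a single "Perceived Usefulness" score, which is what the descriptive statistics should be run on.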
2018-01-20 04:57:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6305536031723022, "perplexity": 1580.734521502222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00200.warc.gz"}
https://emacs-china.org/t/esc/11475
# How to bind the ESC key correctly

https://github.com/emacs-evil/evil/blob/d14a349fcf7acf429ab9aa5e0faf43c816559fe7/evil-core.el#L593

"When enabled, `evil-esc-mode` modifies the entry of \e in `input-decode-map`. If such an event arrives, it is translated to a plain `'escape` event if no further event occurs within `evil-esc-delay` seconds. Otherwise no translation happens and the ESC prefix map (i.e. the map originally bound to \e in `input-decode-map`) is returned."
2023-02-02 07:22:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287196159362793, "perplexity": 5022.843131381815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00087.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/in-given-figure-prove-that-cd-da-ab-bc-2ac-criteria-for-congruence-of-triangles_63144
# In the Given Figure, Prove That: CD + DA + AB + BC > 2AC - Mathematics

In the given figure, prove that: CD + DA + AB + BC > 2AC

#### Solution

We have to prove that CD + DA + AB + BC > 2AC.

In ΔABC we have

AB + BC > AC (the sum of two sides of a triangle is greater than the third)   ........(1)

In ΔACD we have

AD + CD > AC (the sum of two sides of a triangle is greater than the third)   .........(2)

Adding (1) and (2), we get

AB + BC + AD + CD > 2AC

Proved.

Concept: Criteria for Congruence of Triangles

#### APPEARS IN

RD Sharma Mathematics for Class 9
Chapter 12 Congruent Triangles
Exercise 12.6 | Q 7.1 | Page 81
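The inequality can also be checked numerically on a concrete quadrilateral. A small sketch; the coordinates of A, B, C, D are chosen arbitrarily for illustration:

```python
# Numeric sanity check of CD + DA + AB + BC > 2*AC for a sample
# quadrilateral ABCD (coordinates are illustrative only).
from math import dist

A, B, C, D = (0, 0), (4, 0), (5, 3), (1, 4)

perimeter = dist(C, D) + dist(D, A) + dist(A, B) + dist(B, C)
diagonal = dist(A, C)

# Holds for any quadrilateral, by the triangle-inequality argument above.
assert perimeter > 2 * diagonal
print(round(perimeter, 2), round(2 * diagonal, 2))
```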
2021-04-10 22:40:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541622161865234, "perplexity": 1423.118490407715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00168.warc.gz"}
https://math.stackexchange.com/questions/620145/understanding-definition-of-big-o-notation
# Understanding definition of big-O notation

In a textbook, I came across a definition of big-O notation; it goes as follows:

We say that $f(x)$ is $O(g(x))$ if there are constants $C$ and $k$ such that $$|f(x)| \le C|g(x)|$$ whenever $x \gt k$. If I'm not mistaken, this basically translates to: $$f(x) = O(g(x)) \Leftarrow\Rightarrow \exists C, \exists k \in \Bbb R \; \forall x \, (x \gt k \rightarrow |f(x)| \le C|g(x)|)$$ Now, I have two questions regarding this statement:

1. Is my verbose translation correct?
2. What exactly does this definition mean about the usage of big-O notation? From what I understand through computer science, big-O is used to represent the computational complexity of an algorithm. So how does this relate to the complexity of an algorithm (if at all)?

Your translation is correct. The intuition behind big-O notation is that $f$ is $O(g)$ if $g(x)$ grows as fast or faster than $f(x)$ as $x \rightarrow \infty$. This is used in computer science whenever studying the time complexity of an algorithm. Specifically, if we let $f(n)$ be the run-time (number of steps) that an algorithm takes on an $n$-bit input to give an output, then it may be useful to say something like "$f$ is $O(n^2)$", so we know that the algorithm is relatively fast for large inputs $n$. On the other hand, if all we knew was that $f$ is $O(2^n)$, then $f$ might run too slowly for large inputs. Note I say "might" here, because big-O only gives you an upper bound, so $n^2$ is $O(2^n)$ but $2^n$ is not $O(n^2)$.

The definition essentially indicates that big-O notation is a tool to denote an upper bound for a function. That is, if $f(x)$ is $\mathcal{O}(2^x)$, that means that, beyond some point, a constant multiple of $2^x$ will always be bigger than $f(x)$. This is used in computer science to indicate that, in the long run, we can treat the algorithm as taking at most on the order of $2^x$ operations.
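The definition can be exercised concretely by exhibiting a witness pair $(C, k)$. A sketch, with functions chosen purely for illustration: $f(x) = 3x^2 + 2x$ is $O(x^2)$, since $2x \le x^2$ for $x \ge 2$ and hence $3x^2 + 2x \le 4x^2$ there.

```python
# Big-O witness check: f(x) = 3x^2 + 2x is O(x^2) with C = 4, k = 2,
# i.e. |f(x)| <= C * |g(x)| for all x > k.
def f(x):
    return 3 * x**2 + 2 * x

def g(x):
    return x**2

C, k = 4, 2
# Spot-check the defining inequality on a large range of integers.
assert all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, 10_000))
print("witness (C, k) =", (C, k))
```

A finite range of course only illustrates the inequality; the algebraic argument above is what proves it for all x > k.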
2021-04-23 06:15:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9826371073722839, "perplexity": 103.33709327515803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00474.warc.gz"}
https://socratic.org/questions/how-do-you-determine-whether-x-2-is-a-factor-of-the-polynomial-x-4-2x-2-3x-4
# How do you determine whether x+2 is a factor of the polynomial x^4-2x^2+3x-4?

Feb 27, 2017

$(x+2)$ is not a factor of the polynomial.

#### Explanation:

By the Factor Theorem, $(px+q)$ is a factor of a polynomial $f(x) \iff f(-\frac{q}{p}) = 0$.

We have $(px+q) = x+2$, so that $p = 1$, $q = 2$, whence $-\frac{q}{p} = -2$, and $f(x) = x^4 - 2x^2 + 3x - 4$.

$\therefore f(-\frac{q}{p}) = f(-2) = (-2)^4 - 2(-2)^2 + 3(-2) - 4 = 16 - 8 - 6 - 4 = -2 \ne 0$.

$\therefore (x+2)$ is not a factor of the polynomial $f(x)$.
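The same Factor Theorem check is easy to script: evaluate the polynomial at $x = -2$ and test whether the result is zero.

```python
# Factor-theorem check: (x + 2) divides f(x) iff f(-2) == 0.
def f(x):
    return x**4 - 2 * x**2 + 3 * x - 4

value = f(-2)
print(value)  # → -2
assert value != 0  # so (x + 2) is not a factor
```

The remainder -2 is exactly what polynomial long division of f(x) by (x + 2) would leave.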
2020-04-06 17:55:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144922614097595, "perplexity": 3059.7288650278574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371656216.67/warc/CC-MAIN-20200406164846-20200406195346-00008.warc.gz"}
https://www.albert.io/learn/linear-algebra/question/elementary-row-operations-and-socks-shoes
Let $A$ and $B$ be $4\times4$ matrices and suppose the following sequence of elementary row operations transforms $A$ to $B$: first, swap rows $1$ and $2$; second, replace row $4$ by row $3$ plus row $4$; third, multiply row $3$ by $-2$.

We can transform $B$ to $A$ by performing a certain sequence of elementary row operations based on the procedure above. What is the matrix that realizes the FIRST of these operations?

A $\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&1&1\end{pmatrix}$

B $\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&-1&1\end{pmatrix}$

C $\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&-\cfrac{1}{2}&0\\ 0&0&0&1\end{pmatrix}$

D $\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&-2&0\\ 0&0&0&1\end{pmatrix}$

E $\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}$
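A question like this can be sanity-checked numerically. If $B = E_3 E_2 E_1 A$ (with $E_1, E_2, E_3$ the three elementary matrices, applied in the stated order), then $A = E_1^{-1} E_2^{-1} E_3^{-1} B$, so the first operation applied to $B$ is $E_3^{-1}$: multiply row 3 by $-1/2$. A sketch with NumPy, using a random $A$ for illustration:

```python
# Undo the row operations in reverse order: B = E3 @ E2 @ E1 @ A
# implies A = inv(E1) @ inv(E2) @ inv(E3) @ B.
import numpy as np

I = np.eye(4)

E1 = I[[1, 0, 2, 3]]            # swap rows 1 and 2
E2 = I.copy(); E2[3, 2] = 1.0   # replace row 4 by row 3 + row 4
E3 = I.copy(); E3[2, 2] = -2.0  # multiply row 3 by -2

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = E3 @ E2 @ E1 @ A

first_undo = np.linalg.inv(E3)  # = diag(1, 1, -1/2, 1)
recovered = np.linalg.inv(E1) @ np.linalg.inv(E2) @ first_undo @ B
assert np.allclose(recovered, A)
```

Note the "socks and shoes" pattern: the inverses are applied in the opposite order to the original operations.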
2017-03-30 22:38:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863254725933075, "perplexity": 217.82611444810252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203536.73/warc/CC-MAIN-20170322213003-00566-ip-10-233-31-227.ec2.internal.warc.gz"}
http://insurethatbusiness.com/ex2g15/how-to-compare-two-predictors-c41c0b
# how to compare two predictors

## Comparing two classifiers

I have developed a new predictor, based on neural networks, for a specific problem in bioinformatics. This predictor takes as inputs several features and returns a boolean target value. Additionally, I have run my dataset through other, already published predictors (none of which is based on neural networks). From all these results I have generated 9 contingency tables (one per predictor) based on the target value and the predictor's response. Is there any way to compare these statistical tables in such a manner that I can state that my predictor is better or worse than any of the other predictors, supported by a significant p-value?

I think it's important first to define what is important in this particular problem: are you looking for best overall accuracy, specificity, sensitivity, precision, AUC, etc.? If you know the measure you want to use, then the results of repeated cross-validation runs provide a sample of that measure for each classifier; you can then use a simple ANOVA (or, for exactly two classifiers, a t-test on the squared errors) to determine whether the means of the measure differ between your classifier and the others. If you don't have a measure in mind, I'd recommend just going with AUC, as it's a little more fine-grained than accuracy. You should have a healthy amount of data for these methods, or you could end up with a lot of unwanted noise. I would point you towards http://arion.csd.uwo.ca/faculty/ling/papers/ijcai03.pdf.

One caveat: here the comparison is made between models from different families. I could not find any literature to support that; I did see one paper that explicitly stated (with no theoretical justification) that it was fine to compare different families, so I ran a simulation ...

## Comparing two predictors within a single regression

Suppose I have two variables, A and B, and I want to compare their contribution to predicting an outcome Y. So I run a linear regression: Y ~ A + B. This gives me an ANOVA table showing that the F-values associated with A and B are both significant; however, the F-value of A is a powerful 20, while the F-value of B is a wimpier 5. I want to definitively say that one is more predictive than the other (strongly preferably using non-Bayesian statistics; I don't want to use Bayesian statistics, for simplicity's sake, when explaining results to others). How should I compare the predictive powers of A and B? Would this answer be most elegantly framed in terms of AIC or BIC? If I can do this all with a straightforward F-test, that would be nice: should I take SquaredSum(A) / SquaredSum(B) as my new F-value, with DF1 = n - 2 and DF2 = n - 2, where n is the number of subjects? (None of this would change if I was doing a logistic regression and/or a multilevel model, right?)

First, clarify what you mean by "predictive power": do you mean which variable is more strongly related to the outcome in your model, or which would be a better predictor of future cases? Also ask whether A and B are on the same scale (that is, are they both 1-7 scales, or both 1/0 variables, etc.); comparing the regression slopes is not appropriate when the value distributions of A and B have different variances. And note that there is no F-test in logistic regression, so clarify what kind of model you are asking about.

A standard approach is to compare nested models. For example, with one categorical and one public predictor, we can compare the model with both predictors against the model with the public predictor only, and conduct an F-test; from one such comparison we have F = 21.887 with a p-value = 1.908e-10. Equivalently, state H0: the effect of A on Y adds nothing (model 1) against H1: the effect of A on Y is useful (model 2), and use a likelihood ratio (-2 log-likelihood) to compare both models while keeping their variance structure the same. When comparing nested models, it is good practice to compare using adjusted R-squared rather than just R-squared: adjusted R-squared is formulated such that it penalises the number of terms (read: predictors) in your model, so unlike R-squared, it may not always increase as predictors are added.

To test whether the same predictor has a different coefficient in two groups, compare the regression coefficients of males with females under the null hypothesis H0: b_f = b_m (equivalently, b_f - b_m = 0), where b_f is the regression coefficient for females and b_m is the regression coefficient for males. In SPSS:

compute female = 0.
if gender = "F" female = 1.
compute femht = female*height.
execute.
regression /dep weight /method = enter female height femht.

Here female, height and femht are the predictors in the regression equation, and the term femht tests the null hypothesis H0: b_f = b_m. More generally, an interaction term between two variables is needed if the effect of one variable depends on the level of the other.

Further notes:

- T-tests are used when comparing the means of precisely two groups (e.g. the average heights of men and women); ANOVA and MANOVA tests are used when comparing the means of more than two groups (e.g. the average heights of children, teenagers, and adults).
- Collinearity is a linear association between two predictors; multicollinearity is a situation where two or more predictors are highly linearly related. In general, an absolute correlation coefficient of > 0.7 among two or more predictors indicates the presence of multicollinearity.
- Linear regression models can also include functions of the predictors, such as transformations, polynomial terms, and cross-products (interactions). Polynomial regression can fit nonlinear relationships between predictors and the outcome; splines are series of polynomial segments strung together, joining at knots.
- When there are many possible predictors, we need some strategy for selecting the best predictors to use in a regression model. A common approach that is not recommended is to plot the forecast variable against a particular predictor and drop that predictor if there is no noticeable relationship. In R, combn creates a matrix with all the 2-way combinations of predictors; apply then iterates over the columns, and paste creates the text representing each formula, so that text_form holds all the 2-way formulas represented as text.
- For comparing the distributions of two predictors visually, you can use density plots for smoother distributions; using the same scale for each makes it easy to compare them.
Own Data Analysis add-in for regression ( Analysis Toolpak ), this is the dependent variable or... All the 2 way formulas represented as text are not and femht as.. The rest of the predictors in the regression equation you could end up with references or personal experience is extension... Predictors and the other with public predictor only dependent random variable of group! In Flight Simulator poster non-Bayesian statistics ) in general, an absolute correlation coefficient of > 0.7 among or... Known as independent variables time as predictors in the rulebook does it explain how to calculate a regression with! Two different variables in logistic regression model? are a and B may have different variances matrix with the! Stack Exchange Inc ; user contributions licensed under cc by-sa boolean target value functions the... General, an absolute correlation coefficient of > 0.7 among two or more indicates. More popular in life sciences over the board game two groups ( e.g use apply which over. In a regression model? and/or a multilevel model, the outcome target... Have independent tests /dep weight /method = enter female height femht of one variable depends on the level of analgesic! Of the predictors, we need some strategy for selecting the best predictors to use these or can! Predictive power '' is clearly bad phrasing the patient reported pain or not nested. Installing an electrical outlet first item in a regression model? at knots me some F-values model you asking... Their contribution to ML accuracy with neuralgia networks ) or criterion variable ), let’s say that is!, it’s basically the same scale? Analysis add-in for regression ( Analysis Toolpak ), this the. Predictor only if the effect of one variable depends on the value of two different variables in two variables... Conc, and adults ) I put the predictors between two variables is needed if the of! butt plugs '' before burial and cross-products, or between the two models, one with categorical! 
How does one promote a third queen in an over the last two.... Of unwanted noise is called the dependent variable ( or sometimes, the of... Elderly patients with neuralgia as a generalization, let’s say that one more. Image ( max 2 MiB ) explain how to avoid collinearity of categorical variables in logistic regression model? \begingroup... Two variables that I want to predict is called the dependent random of...: //arion.csd.uwo.ca/faculty/ling/papers/ijcai03.pdf = 1.908e-10 will examine regression equations that use two predictor (. The formulas.paste creates the how to compare two predictors representing the formula polynomial segments strung together, joining at knots also functions! The web at knots F test in logistic regression it explain how to Odd!, D, and cross-products, or interactions dependent random variable of each and... Will examine regression equations that use two predictor variables are also known as independent variables an interaction term two. Able to detect 3V strongly-related to the outcome in your logistic regression the... Have more than two predictors compare intercepts from two or more predictors indicates the presence of multicollinearity then use,... Conduct a F-test for comparing the predictors in the regression equation with two independent variables F-test for the. ) = my new F-value anova and MANOVA tests are used when comparing nested models, it measured... Asked 6 years, 8 months ago strongly-related to the outcome in your logistic regression, so please clarify kind! I put the predictors in the regression equation with two independent variables we want to definitively say that have! Good practice to compare predictors of Y test can I travel to receive Covid. The board game test a model for Impurity with Temp, Catalyst Conc, and input variables many possible,. 
Board game than just R-squared was n't aware of this would change if I can do this all with p-value., should the F-critical value have DF1 = n-2, DF2 = n-2, DF2 = n-2, where =! Site dealing with this question, e.g references or personal experience have runned my dataset through other already predictors... The hist ( ) function matrix with all the 2 way formulas represented as text than just R-squared variables... The 2-way combinations use to compare AUC for two ROC curves using non-Bayesian statistics ) the time to.! Plenty of other Q & a 's on this site dealing with this question, how to compare two predictors regression... With references or personal experience clarification, or between the three reactors use apply which over... = 21.887 with a lot of unwanted noise where in the rulebook does it explain how to collinearity... Before comparing the slopes of the predictors between two groups ( e.g I do this should! Sequence that matches a condition one is more strongly-related to the outcome in your logistic regression model increase... Scales or are they both1/0 variables etc. our model to investigate differences in between... ( strongly preferably using non-Bayesian statistics ) ( like C, D and! €“ B m – B m – B m – B m – m! Variables in logistic regression, so please clarify what kind of model you are asking.... Of which based on neural networks ) regression, so please clarify what kind of you. Regression equation with two independent variables, x-variables, and Reaction time as predictors in the seems. Auc for two ROC curves get the first item in a sequence that matches a condition as number! Between predictors and the other one ( strongly preferably using non-Bayesian statistics ) networks for specific! Are highly linearly related where two or more other variables to create the formulas.paste creates the text representing the.... Installing an electrical outlet them in R, it’s basically the same scale? more than two groups, is... 
Of this would change if I 'm explaining results to others since summary ( glmerModel ) me. Algorithms using t-test = 1. compute femht = female * height do you mean which more! When there are also plenty of other Q & a 's on site... Returns a boolean target value strung together, joining at knots OK with engine placement depicted in Flight Simulator?... An electrical outlet would be nice. ) be extended to include all p.. Scientists father in another dimension, worm holes in buildings create a matrix with the. 1. compute femht = female * height models, it is used when comparing the of... Question, e.g ( or sometimes, the adj-R-sq may not always increase new!
https://encyclopediaofmath.org/wiki/Gauss-Kronrod_quadrature_formula
A quadrature formula of highest algebraic accuracy of the type \begin{equation*} \int _ { a } ^ { b } p ( x ) f ( x ) d x \approx Q _ { 2 n + 1 } ^ { G K } [ f ] = \sum _ { \nu = 1 } ^ { n } \alpha _ { \nu } f ( x _ { \nu } ) + \sum _ { \mu = 1 } ^ { n + 1 } \beta _ { \mu } f ( \xi _ { \mu } ), \end{equation*} where $x _ { 1 } , \ldots , x _ { n }$ are fixed, being the nodes of the Gauss quadrature formula $Q _ { n } ^ { G }$, and $p$ is a weight function (see Quadrature formula). Depending on $p$, its algebraic accuracy is at least $3 n + 1$, but may be higher (see Quadrature formula of highest algebraic accuracy). For the special case $p \equiv 1$, which is most important for practical calculations, the algebraic accuracy is precisely $3 n + 1$ if $n$ is even and $3 n + 2$ if $n$ is odd [a7]. The pair $( Q _ { n } ^ { G } , Q _ { 2n+1 } ^ { G K } )$ provides an efficient means for the approximate calculation of definite integrals with practical error estimate, and hence for adaptive numerical integration routines (cf. also Adaptive quadrature). Gauss–Kronrod formulas are implemented in the numerical software package QUADPACK [a6], and they are presently (1998) the standard method in most numerical libraries. The nodes $\xi _ { 1 } , \dots , \xi _ { n + 1}$ of the Gauss–Kronrod formula are the zeros of the Stieltjes polynomial $E _ { n + 1}$, which satisfies \begin{equation*} \int _ { - 1 } ^ { 1 } p ( x ) P _ { n } ( x ) E _ { n + 1 } ( x ) x ^ { k } d x = 0 , \quad k = 0 , \dots , n, \end{equation*} where $\{ P _ { n } \}$ is the system of orthogonal polynomials with respect to $p$ (cf. also Stieltjes polynomials). An iteration of these ideas leads to a nested sequence of Kronrod–Patterson formulas (cf. Kronrod–Patterson quadrature formula). For several special cases of weight functions $p$, the Stieltjes polynomials have real roots inside $[ a , b ]$ which interlace with the zeros of $P_n$.
In particular, this is known for $p \equiv 1$, and in this case also the weights $\alpha _ { \nu }$ and $\beta _ { \mu }$ of the Gauss–Kronrod formulas are positive. These facts are not necessarily true in general, see [a3], [a4], [a5] for surveys. The nodes and weights of Gauss–Kronrod formulas for $p \equiv 1$ are distributed very regularly (see also Stieltjes polynomials for asymptotic formulas and inequalities). Error bounds for Gauss–Kronrod formulas have been given in [a2]. It is known that for smooth (i.e. sufficiently often differentiable) functions, Gauss–Kronrod formulas are significantly inferior to the Gauss quadrature formulas (cf. Gauss quadrature formula) which use the same number of nodes (see [a2]). Cf. also Stopping rule for practical error estimation with the Gauss and other quadrature formulas. #### References [a1] P.J. Davis, P. Rabinowitz, "Methods of numerical integration" , Acad. Press (1984) (Edition: Second) [a2] S. Ehrich, "Error bounds for Gauss–Kronrod quadrature formulas" Math. Comput. , 62 (1994) pp. 295–304 [a3] W. Gautschi, "Gauss–Kronrod quadrature — a survey" G.V. Milovanović (ed.) , Numer. Meth. and Approx. Th. , III , Nis (1988) pp. 39–66 [a4] G. Monegato, "Stieltjes polynomials and related quadrature rules" SIAM Review , 24 (1982) pp. 137–158 [a5] S.E. Notaris, "An overview of results on the existence and nonexistence and the error term of Gauss–Kronrod quadrature formulas" R.V.M. Zahar (ed.) , Approximation and Computation , Birkhäuser (1995) pp. 485–496 [a6] R. Piessens, et al., "QUADPACK: a subroutine package in automatic integration" , Springer (1983) [a7] P. Rabinowitz, "The exact degree of precision of generalised Gauss–Kronrod integration rules" Math. Comput. , 35 (1980) pp. 1275–1283 (Corrigendum: Math. Comput. 46 (1986), 226)
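As a small illustration of the ideas above (not part of the original entry), the pure-Python sketch below checks the degree of exactness of a Gauss rule and uses a pair of rules of different order as a practical error estimate. Note that this (Gauss-2, Gauss-3) pair is not nested the way a true $(Q_n^G, Q_{2n+1}^{GK})$ pair is — a Kronrod extension reuses all $n$ Gauss nodes — but the role of the coarse/fine difference as an error estimate is the same.

```python
import math

# Gauss-Legendre rules on [-1, 1] as (node, weight) pairs.
GAUSS2 = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]                    # exact to degree 3
GAUSS3 = [(-math.sqrt(3 / 5), 5 / 9), (0.0, 8 / 9), (math.sqrt(3 / 5), 5 / 9)]  # exact to degree 5

def quad(rule, f):
    """Apply a quadrature rule to f."""
    return sum(w * f(x) for x, w in rule)

# The 3-point rule integrates polynomials up to degree 5 exactly:
# int_{-1}^{1} x^4 dx = 2/5.
assert abs(quad(GAUSS3, lambda x: x**4) - 2 / 5) < 1e-12

# For a non-polynomial integrand, the difference between the coarse and
# fine rule serves as a practical error estimate for the finer rule.
exact = 2 * math.sin(1.0)  # int_{-1}^{1} cos(x) dx
q2, q3 = quad(GAUSS2, math.cos), quad(GAUSS3, math.cos)
assert abs(q3 - exact) < abs(q3 - q2)  # estimate bounds the true error here
```

In practice, scipy.integrate.quad exposes QUADPACK's adaptive Gauss–Kronrod integration and returns both the integral estimate and an error bound.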
https://www.gradesaver.com/textbooks/math/calculus/calculus-with-applications-10th-edition/chapter-r-algebra-reference-r-4-equations-r-4-exercises-page-r-16/28
# Chapter R - Algebra Reference - R.4 Equations - R.4 Exercises - Page R-16: 28 The solution is $x=12$ #### Work Step by Step $\dfrac{x}{3}-7=6-\dfrac{3x}{4}$ Multiply both sides of the equation by $12$: $12\Big(\dfrac{x}{3}-7\Big)=12\Big(6-\dfrac{3x}{4}\Big)$ $4x-84=72-9x$ Move $9x$ to the left side and $84$ to the right side: $4x+9x=72+84$ $13x=156$ Divide both sides by $13$: $x=\dfrac{156}{13}$ $x=12$
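As a quick sanity check (not part of the textbook solution), the result can be verified with exact rational arithmetic in Python:

```python
from fractions import Fraction

x = Fraction(156, 13)  # the claimed solution, 156/13 = 12
assert x == 12

# Substitute back into x/3 - 7 = 6 - 3x/4: both sides should agree.
lhs = x / 3 - 7
rhs = 6 - Fraction(3, 4) * x
assert lhs == rhs == -3
```

Using Fraction avoids any floating-point rounding, so the check confirms the solution exactly.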
https://indico.desy.de/indico/event/18498/other-view?view=standard
# Planck 2018 ## From the Planck Scale to the Electroweak Scale from to (Europe/Berlin) at Bonn Physikalisches Institut Nussallee 12 53115 Bonn Description Material: Poster Support Email: theory@physik.uni-bonn.de • Monday, May 21, 2018 • 08:00 - 09:00 Registration Location: Zeichensaal (1st floor) • 09:00 - 10:00 Review Talk Convener: Mr. Herbert Dreiner (Universitaet Bonn) • 09:00 Model Building in the LHC Era 1h0' I will give my impressions of model building in the LHC era. WHAT are the problems for theory, WHERE are the hints for new physics and WHEN will we know. Speaker: Stuart Raby (The Ohio State University) Material: • 10:00 - 10:30 Plenary Session Convener: Mr. Herbert Dreiner (Universitaet Bonn) • 10:00 Dark matter interactions and large scale structure of the universe 30' The cosmological standard model, LCDM, provides a very good overall fit to the precision cosmological data from the CMB, its polarization, baryon acoustic oscillations, measurements of large scale structure, and the current expansion rate of the universe. The most significant tension in the fit is associated with two quantities: the current expansion rate, H_0, and the amplitude of the matter power spectrum at the scale of galaxy clusters, sigma_8. I explain the physics behind this tension and present models with dark matter interactions which can remove it. I also show how future measurements of the shape of the matter power spectrum from weak lensing can be used to differentiate between models. Speaker: Martin Schmaltz (Boston University) Material: • 10:30 - 11:00 Coffee Break ( Foyer ) • 11:00 - 12:30 Plenary Session Convener: Prof. Howard Haber (Santa Cruz Institute for Particle Physics) • 11:00 ALP-driven magnetogenesis and photon-ALP-dark photon oscillations 30' String theory suggests the existence of multiple axion-like particles (ALPs) as well as hidden sector gauge bosons.
Some of those ALPs and hidden U(1) gauge bosons (dark photons) might be light enough to result in interesting astrophysical and/or cosmological consequences. As an example of such possibility, we examine ALP-driven late time magnetogenesis generating the cosmic magnetic field via the generation of dark photon field. We also study the photon-ALP-dark photon oscillations in the presence of background dark photon field, and examine if such oscillations can explain the recently noticed gamma-ray spectral modulations of some galactic pulsars and supernova remnants. Speaker: Kiwoon Choi (CTPU/IBS) Material: • 11:30 Solving the Hierarchy Problem Discretely 30' We present a new mechanism for making a scalar parametrically lighter than the UV cutoff utilizing non-linearly realized discrete symmetries. The cancelations occur due to a discrete symmetry that is realized as a shift symmetry on the scalar and as an exchange symmetry on the particles with which the scalar interacts. Speaker: Anson Hook (University of Maryland) Material: • 12:00 An Update on the LHC Monojet Excess 30' While the LHC has done an excellent job of looking for new physics in all the expected places using simplified models, so far much less attention has been paid to model-independent, data-driven approaches. I will review a recently proposed method of "rectangular aggregations" that can find potentially interesting excesses in existing LHC searches without relying on simplified models. As a proof of concept, I will show how the method uncovers some highly statistically significant (and previously overlooked) discrepancies in the low-pT regions of ATLAS and CMS monojet searches. I will describe a simplified model that fits the monojet excess well, and discuss its implications. I will also discuss various issues with the background estimation and the control regions which may provide a SM explanation for this discrepancy. 
Regardless of whether this discrepancy is due to new physics or missing SM effects, there are interesting things going on in the monojet channel, and more generally, in the overlooked bulk of the LHC data. Speaker: David Shih (Rutgers University) Material: • 12:30 - 14:30 Lunch Break ( Foyer, Bonn ) • 14:30 - 16:00 Plenary Session Convener: Marek Olechowski • 14:30 Order-4 CP symmetry: flavor, dark matter, and beyond 30' I will describe recent results on phenomenological consequences of imposing an exotic CP-symmetry of order 4 in multi-Higgs models. Despite minimality of the assumptions, such models have well shaped scalar and flavor sectors and lead to rich phenomenology to be tested in experiments and astroparticle observations. Speaker: Dr. Igor Ivanov (CFTP, Instituto Superior Tecnico) Material: • 15:00 Improved theoretical constraints on BSM models 30' BSM models must be confronted with a series of experimental and theoretical constraints. On the theoretical side, it is standard to check the tree-level vacuum stability, the tree-level perturbative unitarity constraints in the limit of large scattering energies, and the cut-off scale using one-loop running with tree-level matching. I will discuss that these constraints can get crucial corrections: (i) loop corrections to the vacuum stability can re-open parameter regions, (ii) 2 to 2 scattering at small energies can lead to much stronger constraints, and (iii) reliable results for the high-scale behaviour of a theory are sometimes just obtained by combining two-loop running and matching. In addition, the check of the perturbative behaviour of the theory can become important. Examples for SM extensions with singlets, doublets and triplets are given. Speaker: Dr. 
Florian Staub (KIT) Material: • 15:30 Composite Dark Matter 30' We consider a composite model where both the Higgs and a complex scalar χ, which is the dark matter (DM) candidate, arise as light pseudo Nambu-Goldstone bosons (pNGBs) from a strongly coupled sector with TeV scale confinement. Speaker: Andreas Weiler (TUM) • 16:00 - 16:30 Coffee Break ( Foyer ) • 16:30 - 17:00 Plenary Session Convener: Dr. Thomas Flacke (IBS CTPU) • 16:30 Theories of Flavour from the Planck scale to the Electroweak Scale 20' In this talk we discuss various Theories of Flavour from the Planck scale to the Electroweak Scale, ranging from SUSY GUTs with Flavour Symmetry (with or without extra dimensions) to Flavourful Z' Models at the Electroweak scale capable of accounting for R_K(*). Speaker: Steve King (University of Southampton) Material: • 18:00 - 20:00 Welcome Location: Foyer Wegelerstr. 10 • Tuesday, May 22, 2018 • 08:00 - 09:00 Registration Location: Zeichensaal (1st floor) • 09:00 - 10:00 Review Talk Convener: Manuel Drees (Bonn University) • 09:00 Status and perspectives of BSM searches at the LHC 1h0' With the LHC Run II coming to an end and the LHC experiments already preparing for HL-LHC, this talk will review the current status of BSM searches, with particular emphasis on new analysis techniques developed in the recent past and possible implications for the LHC Run III and the long term. Speaker: Maurizio Pierini (CERN) Material: • 10:00 - 10:30 Plenary Session Convener: Manuel Drees (Bonn University) • 10:00 Higgs coupling measurements and their implications on new physics 30' With increasingly precise measurements of Higgs boson interactions, we start to set tighter limits on Standard Model extensions. First, I will review the impact of existing and ongoing Higgs coupling measurements on how they constrain such extensions. In the second part of the presentation, I will focus on one of the most important couplings of the scalar sector, the Higgs self-coupling. 
Given theoretical constraints on the Higgs self-coupling, the capacity of future colliders will be assessed in measuring deviations from Standard Model predictions. Speaker: Michael Spannowsky (Durham University) Material: • 10:30 - 11:00 Coffee Break ( Foyer ) • 11:00 - 12:30 Plenary Session Convener: Prof. Margarete Mühlleitner (KIT) • 11:00 LHC Recasting Tools and Applications 30' Speaker: Dr. Jong Soo Kim (University of the Witwatersrand, NITheP, MITP) Material: • 11:30 Natural SUSY from the landscape 30' I review how natural SUSY may emerge from the string landscape and present probability distributions for Higgs and sparticle masses. Speaker: Prof. Howard Baer (University of Oklahoma) Material: • 12:00 Searches for long-lived particles at the LHC: closing the gaps in BSM signature coverage 30' Speaker: Dr. Nishita Desai (LUPM, Montpellier) Material: • 12:30 - 14:00 Lunch Break • 14:00 - 15:00 Parallel Session Higgs and Flavor Model Building Location: Lecture Hall 1, PI • 14:00 Mass Degeneracies in Extended Higgs Sectors 20' Mechanisms that yield mass degeneracies in extended Higgs sectors with multiple Higgs doublets are examined. Some puzzling features of a recent 3HDM constructed by Ivanov and Silva are considered that appear to be connected to the mass degeneracies of their model. Speaker: Prof. Howard Haber (Santa Cruz Institute for Particle Physics) Material: • 14:20 Muon anomalous magnetic moment in the presence of CP-violation 20' I will talk about popular scalar extensions of the Standard Model, namely the singlet extension, the 2-Higgs doublet model (2HDM) and its extension by a singlet scalar. I will focus on the contributions of the added scalars to the anomalous magnetic moment of the muon, and the electric dipole moment of the electron in these models. Speaker: Dr.
Venus Keus (University of Helsinki) Material: • 14:40 Third Family Quark-Lepton Unification at the TeV Scale 20' We construct a model of quark-lepton unification at the TeV scale based on an SU(4) gauge symmetry, while still having acceptable neutrino masses and enough suppression in flavor changing neutral currents. An approximate U(2) flavor symmetry is an artifact of family-dependent gauge charges leading to a natural realization of the CKM mixing matrix. The model predicts sizeable violation of PMNS unitarity as well as a gauge vector leptoquark U1 = (3,1,2/3) which can be produced at the LHC -- both effects within the reach of future measurements. In addition, recently reported experimental anomalies in semi-leptonic B-meson decays, both in charged b->cτν and neutral b->sμμ currents, can be accommodated. Speaker: Dr. Benjamin Stefanek (Johannes Gutenberg Universität Mainz) Material: • 14:00 - 15:00 Parallel Session on Experimental Searches Location: bctp, Seminar Room 1 • 14:00 IceCube bounds on sterile neutrinos above 10 eV 20' We study the capabilities of IceCube to search for sterile neutrinos with masses above 10 eV by analyzing its νμ disappearance atmospheric neutrino sample. We find that IceCube is not only sensitive to the mixing of sterile neutrinos to muon neutrinos, but also to the more elusive mixing with tau neutrinos through matter effects. The currently released 1-year data shows a mild (around 2σ) preference for non-zero sterile mixing, which overlaps with the favoured region for the sterile neutrino interpretation of the ANITA upward shower. Although the null results from CHORUS and NOMAD on νμ to ντ oscillations in vacuum disfavour the hint from the IceCube 1-year data, the relevant oscillation channel and underlying physics are different. At the 99% C.L. an upper bound is obtained instead that improves over the present Super-Kamiokande and DeepCore constraints in some parts of the parameter space. 
We also investigate the physics reach of the roughly 8 years of data that is already on tape as well as a forecast of 20 years of data to probe the present hint or improve upon current constraints. Speaker: Julia Gehrlein (IFT-UAM) Material: • 14:20 Axion Production and Detection using Superconducting RF cavities 20' We propose a "Light Shining Through Walls"-type experiment to search for axions with high-$Q$ superconducting RF cavities. Our setup uses a gapped toroid to confine a static magnetic field, with production and detection cavities positioned in regions of vanishing external field. We argue that the confining toroid does not significantly screen the axion-induced signal for frequencies of order the inverse toroid size. This allows both cavities to be superconducting with quality factors $Q \sim 10^{10}$, thus significantly improving the sensitivity of the experiment. Such a search has the potential to probe axion-photon coupling $\sim 2 \times 10^{-11} \,\mathrm{GeV}^{-1}$, comparable to the future ALPS II. Speaker: Vijay Narayan (UC Berkeley) Material: • 14:00 - 15:00 Parallel Session on Collider Searches Location: Big Lecture Hall, Mathematics • 14:00 Heavy Neutral Leptons at the LHC 20' Heavy neutral leptons or massive sterile neutrinos are present in many well-motivated beyond the Standard Model theories, sometimes being accessible at present colliders. Our aim is to discuss the potential of the LHC to produce and discover these new particles when their masses are in the GeV range. Moreover, depending on their masses and couplings, they can be long-lived and lead to signatures with displaced vertices, or decay promptly within the detector. Speaker: Dr. Xabier Marcano (LPT Orsay) Material: • 14:20 Heavy neutral fermions at the high-luminosity LHC 20' Long-lived light particles (LLLPs) appear in many extensions of the standard model. LLLPs are usually motivated by the observed small neutrino masses, by dark matter or both.
Typical examples for fermionic LLLPs (a.k.a. heavy neutral fermions, HNFs) are sterile neutrinos or the lightest neutralino in R-parity violating supersymmetry. The high luminosity LHC is expected to deliver up to 3/ab of data. Searches for LLLPs in dedicated experiments at the LHC could then probe the parameter space of LLLP models with unprecedented sensitivity. Here, we compare the prospects of several recent experimental proposals, FASER, CODEX-b and MATHUSLA, to search for HNFs and discuss their relative merits. Speaker: Simon Zeren Wang (bctp, Bonn University) Material: • 14:40 B+L violation at colliders and new physics 20' Chiral electroweak anomalies predict fermion interactions that violate baryon (B) and lepton number (L), and can be dressed with large numbers of Higgs and gauge bosons. The estimation of the total B+L violating rate from an initial two-particle state --potentially observable at colliders-- has been the subject of an intense discussion, mainly centered on the resummation of boson emission, which is believed to contribute to the cross-section with an exponential function of the energy, yet with an exponent (the "holy-grail" function) which is not fully known in the energy range of interest. Focusing instead on the effect of fermions beyond the Standard-Model (SM) in the polynomial contributions to the rate, it is shown that the latter can be enhanced by several orders of magnitude with respect to the SM result, for high centre-of-mass energies and light enough masses. Further calculations hint at a simple dependence of the holy grail function on the heavy fermion masses. Thus, if anomalous B+L violating interactions are ever detected at high-energy colliders, they could be associated with new physics. 
Speaker: Carlos Tamarit (Technische Universität München) Material: • 14:00 - 15:00 Parallel Session on Inflation Location: Small Lecture Hall, Mathematics • 14:00 Hill-climbing inflation and gravitational reheating 20' I will present a model of hill-climbing inflation based on a scalar-tensor theory. I will show how the reheating can be generated by gravitational particle production and I will present the consequences of the gravitational reheating on the thermal history of the Universe and primordial gravitational waves. Speaker: Dr. Michal Artymowski (Jagiellonian University) Material: • 14:40 Electroweak Phenomenology of Higgs Inflation in the NMSSM 20' We study the phenomenology of a Next-to-Minimal Supersymmetric Standard Model (NMSSM) that can achieve early universe inflation. The Higgs sector of such a model clearly differs from the usual NMSSM and can provide a smoking gun of inflation at low energies. Speaker: Dr. Wolfgang Gregor Hollik (DESY) Material: • 15:00 - 15:30 Coffee Break ( Foyer ) • 15:30 - 16:30 Parallel Session on Higgs Vacuum and RGE Location: bctp, Seminar Room 1 • 15:30 Higgs domain walls in the Standard Model and its extensions 20' The study of the renormalization group improved effective potential of the Standard Model has revealed the existence of a local maximum at field strengths of the order of 10^10 GeV. If the Standard Model is valid for very high energy scales, then the possibility of the production of cosmological domain walls in the early Universe arises. We investigated the dynamics of networks of domain walls using lattice simulations. We studied domain walls in the extensions of the Standard Model using the formalism of the Effective Field Theory. The energy spectrum of gravitational waves emitted from Higgs domain walls was determined. 
Speaker: Tomasz Krajewski (University of Warsaw) Material: • 15:50 NLO Matching Conditions in Extended Higgs Sectors 20' The absence of new physics in current LHC searches leads to increasing interest in a variety of non-minimal extensions of the Standard Model (SM). Also the scale of new physics in widely studied models is pushed to higher energies. Automation tools such as SARAH allow for comprehensive studies of a wider class of models beyond the Standard Model (SM). Not only the tree-level values but also NLO and NNLO corrections to mass spectra can be studied within a reasonable amount of time. However, large mass gaps can lead to problematic large logarithms increasing the uncertainty in fixed order calculations. The use of effective field theories (EFTs) is a common tool to resum these large logs. Thus, precise matching conditions between EFTs and UV complete -or intermediate- theories are needed. After a brief introduction into the running and matching procedures applied by SARAH/SPheno I will discuss various aspects important for a generic matching of scalar sectors at NLO. The connection of systematic cancellations of infrared divergences and contributions from mixed loops containing heavy and light fields in matching conditions is shown. Example cases for EFT Higgs mass calculations within heavy SUSY are presented as well. Speaker: Martin Gabelmann (Karlsruhe Institute of Technology) Material: • 16:10 Regularization without a renormalization scale 20' I will describe a version of the dimensional regularization of a classically scale invariant theory, motivated by the requirement to preserve scale invariance at the level of loop corrections. The theory is embedded in a nonrenormalizable Lagrangian, where both the dimensionful regulator \mu and suppression scale of higher-dimensional interactions are interpreted as a vev of a new dynamical scalar field that mixes with the Higgs. 
The method is applied to an SM-like theory, where the electroweak symmetry and the scale symmetry are broken spontaneously together. The shape of the scalar effective potential and interpretation of the high energy Higgs vacuum are modified. Based on: arXiv:1608.05336, arXiv:1612.09120. Speaker: Pawel Olszewski (University of Warsaw) Material: • 15:30 - 16:30 Parallel Session on Composite Higgs Location: Lecture Hall 1, PI • 15:30 Partially composite Goldstone Higgs 20' Recently there has been a revival of interest in partially composite Higgs models, where electroweak symmetry breaking is dynamically induced, and the Higgs boson is a mixture of a composite and an elementary state. While the Goldstone nature of the composite sector explains the lightness of the Higgs, the elementary scalars provide a UV completion for the SM fermion mass generation via traditional Yukawa interactions. However, in this type of model, requiring vacuum stability up to the compositeness scale already imposes relevant constraints. We study the interplay of these constraints and the LHC results, and we find that in the minimal realisation, a small part of parameter space around the classically conformal limit is stable up to the Planck scale, but this same region is however already disfavored by LHC data. We also consider minimal extensions to alleviate these constraints. Based on PRD 96 (2017) 095012 and JHEP 1810 (2018) 051. Speaker: Dr. Tommi Alanne (MPIK Heidelberg) Material: • 15:50 Asymptotically Free Supersymmetric Twin Higgs 20' Twin Higgs (TH) models explain the absence of new colored particles responsible for natural electroweak symmetry breaking. All known ultraviolet completions of TH models require some non-perturbative dynamics below the Planck scale. A new type of supersymmetric Twin Higgs model is presented in which the TH mechanism is introduced by a new asymptotically free gauge symmetry. 
The model features natural electroweak symmetry breaking for squarks and gluinos heavier than 2 TeV even if supersymmetry breaking is mediated around the Planck scale, and has interesting flavor phenomenology including the top quark decay into the Higgs and the up quark which may be discovered at the LHC. The talk will be primarily based on arXiv:1707.09071 and arXiv:1711.11040. Speaker: Marcin Badziak (University of Warsaw) Material: • 16:10 Composite resonances of fundamental Composite Higgs models 20' Composite Higgs models are among the most promising alternatives to solve the hierarchy problem and provide a dynamical mechanism to break the electroweak symmetry. In this talk I will discuss some predictions of specific UV realizations of this scenario. I will analyze in particular the phenomenology of vector and scalar states in an SU(4)/Sp(4) symmetry breaking pattern, using lattice results as well as unitarity and analyticity arguments to constrain the parameters of the theory. I will also discuss the phenomenology of vector and scalar states of an SU(4) gauge theory with symmetry breaking SU(5)/SO(5) and their interplay with a top partner responsible for the top mass via the partial compositeness mechanism, which is also present in the theory. The talk will be based on arXiv:1605.01363, arXiv:1705.02787 and ongoing work. Speaker: Dr. Diogo Buarque Franzosi (II Physics Institute, Goettingen U.) Material: • 15:30 - 16:30 Parallel Session on Collider Searches Location: Big Lecture Hall, Mathematics • 15:30 Searching for new physics in vector boson scattering at the LHC 20' The electroweak symmetry breaking (EWSB) sector still keeps some mysteries up its sleeve. Questions such as what is the dynamical origin of EWSB, what is the true nature of the Higgs boson, and whether the properties of this particle are the ones predicted in the SM remain unanswered. The LHC is our tool to unveil these mysteries and vector boson scattering (VBS) is the perfect window to access them. 
In this work we perform a model independent analysis of the phenomenology of VBS at the LHC and give predictions for the sensitivity to possible new physics scenarios in the EWSB sector. Speaker: Claudia Garcia-Garcia (IFT-UAM/CSIC) Material: • 15:50 Top, Electroweak & Higgs sector processes in the Standard Model EFT at NLO in QCD 20' The Standard Model Effective Field Theory (SMEFT) provides a consistent formalism to parametrise the effects of heavy new physics appearing as deviations from SM expectations. One of the key goals of the LHC legacy will be to cover this parameter space as much as possible and contribute to a global fit of the EFT parameter space and, hopefully, provide hints on the nature of physics beyond the SM. To this end, precise predictions for all observables of interest in the SMEFT are required, both at fixed order and for Monte Carlo event generators interfaced with parton showers. I will present recent developments towards a first model implementation of operators in the leading Minimal Flavor Violation assumption, showcasing some results at next-to-leading order in QCD for single top production on its own and in association with a Higgs or Z boson at the LHC. The issue of EFT validity will be discussed alongside some LHC sensitivity studies and future prospects for the High Luminosity run. Speaker: Ken Mimasu (Université catholique de Louvain) Material: • 16:10 Consistent Searches for SMEFT Effects in Non-Resonant Dijet Events 20' We investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. 
In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at the first non-trivial order in the expansion in inverse powers of the new-physics scale and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Speaker: Stefan Alte (JGU Mainz) Material: • 15:30 - 16:30 Parallel Session on Light Hidden Sector Location: Small Lecture Hall, Mathematics • 15:30 Probing Light Hidden Sectors with Pulsar Timing Arrays 20' In this talk I will present the projected sensitivities of upcoming pulsar timing arrays (PTAs) to gravitational waves produced by first-order phase transitions. In contrast to ground- or space-based detectors, these experiments are sensitive to phase transitions that occur around keV to MeV energy scales. Consequently, they are excellent probes of many types of light new physics contributing to non-standard cosmology. As an example, I will focus on a sequestered dark-photon model, paying particular attention to the complementarity between standard cosmological probes and gravitational wave signals in PTAs. Speaker: Toby Opferkuch (JGU Mainz) Material: • 15:50 Constraining Ultralight Scalars with Neutron Star Superradiance 20' We demonstrate that rotational superradiance can be efficient in millisecond pulsars. 
Measurements from the two fastest known pulsars, PSR J1748-2446ad and PSR B1937+21, can place bounds on bosons with masses below 10^-11 eV. The bounds are strongest at masses corresponding to the rotation rate of the star, where scalar interactions that mediate forces ∼ 10^7 times weaker than gravity are ruled out, exceeding existing fifth force constraints by 4 orders of magnitude. For certain neutron star equations of state, these measurements also constrain the QCD axion with decay constant around ∼ 10^19 GeV, ruling out axions with masses between 5×10^-13 and 3×10^-12 eV. The observed absence of pulsars above ∼ 700 Hz despite the ability of the neutron star equation of state to support frequencies as high as ∼ 1500 Hz could be due to the superradiant damping of the stellar rotation as a result of its coupling to a new particle of mass ∼ 1500-3000 Hz with Yukawa couplings to nucleons. Although similar bounds have been placed by black hole superradiance, we note these bounds are strong functions of the (difficult to measure) black hole rotation rate, and thus the present bounds benefit from the extreme reliability of pulsar period measurements. Speaker: Paul Riggins (University of California at Berkeley) Material: • 16:10 The Higgs decay into two photons in the Standard Model Effective Field theory 20' Assuming that new physics effects are parametrized by the Standard-Model Effective Field Theory (SMEFT) written in a complete basis of up to dimension-6 operators, we calculate the CP-conserving one-loop amplitude for the decay h->gamma+gamma in general R_xi-gauges. We use this gauge invariant amplitude and recent LHC data to assess the sensitivity to various Wilson coefficients entering from a more complete theory at the matching energy scale. We present a closed expression for the ratio \mathcal{R}_{h->gamma+gamma} of the Beyond the SM versus the SM contributions as it appears in LHC h->gamma+gamma searches. 
With mild assumptions, we point out a set of possibilities for a field theory content at higher energies which may generate sizeable corrections to the h->gamma+gamma amplitude. Speaker: Dr. Athanasios Dedes (University of Ioannina) Material: • Wednesday, May 23, 2018 • 08:00 - 09:00 Registration Location: Zeichensaal (1st floor) • 09:00 - 10:00 Review Talk Convener: Kiwoon Choi (CTPU/IBS) • 09:00 Flavour anomalies: phenomenology and BSM interpretations 1h0' ( Big Lecture Hall, Mathematics ) Several observables in flavour physics deviate from their Standard-Model (SM) expectation. The most prominent ones are related to the decay $b\to s \mu^+ \mu^-$, with a combined statistical significance well above 4 standard deviations. Similarly, branching fractions of $b \to c \tau \nu$ decays and the direct CP asymmetry in $K\to \pi \pi$ decays exceed the SM expectation. I'll discuss theoretical interpretations of these anomalies and highlight which future measurements will help to confirm or refute these anomalies and clarify the underlying theory. Speaker: Prof. Ulrich Nierste (Karlsruhe Institute of Technology) Material: • 10:00 - 10:30 Plenary Session Convener: Kiwoon Choi (CTPU/IBS) • 10:00 New physics in b->c cbar s transitions 30' b->c cbar s transitions proceed at the tree-level in the weak interactions in the Standard Model, and for this reason are typically ignored in the search for new physics. I will show that, in fact, even small BSM contributions can give observable effects in B-meson lifetime observables, CP-violation measurements, as well as rare and radiative decays. This can provide useful constraints on or possible signatures of new physics, and may even play a role in some of the anomalies seen in rare semileptonic B-decays. 
Speaker: Sebastian Jaeger (University of Sussex) Material: • 10:30 - 11:00 Coffee Break ( Foyer ) • 11:00 - 12:30 Plenary Session Convener: Michal Malinsky • 11:00 Deviations in Flavor Physics and Their Implications 30' Several intriguing deviations from Standard Model expectations have been observed in flavor-violating processes in the last few years, most notably hints for violation of lepton flavor universality, both in neutral-current B meson decays involving light leptons and in charged-current B meson decays involving tau leptons. I will give an overview of the status of these deviations and discuss possible interpretations within and beyond the Standard Model. Speaker: Dr. David Straub (TUM) Material: • 11:30 Status of eV-scale Sterile Neutrinos 30' I review the experimental indications in favor of short-baseline neutrino oscillations and I discuss their interpretation in the framework of 3+1 neutrino mixing with a sterile neutrino at the eV scale. I show that the recent results of the NEOS and DANSS reactor neutrino experiments give a new model-independent indication in favor of short-baseline electron antineutrino disappearance, confirming the reactor and Gallium anomalies. On the other hand, the recent result of the MINOS+ experiment disfavors the LSND anomaly. I also discuss the interpretation of the Daya Bay fuel evolution data. Speaker: Dr. Carlo Giunti (INFN) Material: • 12:00 Clockwork Inspired Models For Ultra-Light Scalars 30' Speaker: Stefan Pokorski (University of Warsaw) Material: • 12:30 - 14:00 Lunch Break • 14:00 - 15:40 Parallel Session on Collider Searches Location: Big Lecture Hall, Mathematics • 14:00 Novel measurements of anomalous triple gauge couplings for the LHC 20' A very important and promising direction of research is finding better ways of testing the Standard Model and New Physics using the Effective Field Theory approach and exploiting precision measurements. 
In this framework, Electroweak Triple Gauge Couplings play a special role, particularly within diboson production processes. However, there is often a cancellation in the inclusive measurements of the formally leading contribution to the interference between the SM amplitude and the contribution of irrelevant operators in the EFT. In this talk (based on JHEP10 (2017) 027) I will show that this suppression can be overcome by considering various differential distributions, that significantly improve the sensitivity to BSM effects and the accuracy in their measurements. Speaker: Elena Venturini (SISSA) Material: • 14:20 Flavour Physics meets Heavy Higgs Searches 20' We point out that the stringent lower bounds on the masses of additional electrically neutral and charged Higgs bosons crucially depend on the flavour structure of their Yukawa interactions. We show that these bounds can easily be evaded by the introduction of flavour-changing neutral currents in the Higgs sector. As an illustration, we study the phenomenology of a two Higgs doublet model with a Yukawa texture singling out the third family of quarks and leptons. We combine constraints from low-energy flavour physics measurements, LHC measurements of the 125 GeV Higgs boson rates, and LHC searches for new heavy Higgs bosons. We propose novel LHC searches that could be performed in the coming years to unravel the existence of these new Higgs bosons. Speaker: Dr. Ayan Paul (DESY) Material: • 14:40 The CP-Violating 2HDM in Light of a Strong First Order Electroweak Phase Transition and Implications for Higgs Pair Production 20' The generation of the observed matter-antimatter asymmetry in the universe through baryogenesis cannot be explained in the Standard Model. We therefore investigate the possibility of a strong first order phase transition in the CP-Violating 2-Higgs-Doublet Model (C2HDM) after imposing theoretical and experimental constraints. 
We study the type I and II C2HDM where one of the neutral Higgs bosons can be the Standard Model-like Higgs boson. Our results show that there is a strong interplay between the requirement of a strong phase transition and collider phenomenology with testable implications for searches at the LHC. We find additional preferred mass hierarchies compared to those of the CP-conserving 2HDM. We also use our results to investigate the interplay between a strong phase transition and the size of the trilinear Higgs self-couplings. Speaker: Prof. Margarete Mühlleitner (KIT) Material: • 15:00 Collider phenomenology of Hidden Valley mediators of spin 0 or 1/2 with semivisible jets 20' Many models of Beyond the Standard Model physics contain particles that are charged under both Standard Model and Hidden Valley gauge groups, yet very little effort has been put into establishing their experimental signatures. We provide a general overview of the collider phenomenology of spin 0 or 1/2 mediators with non-trivial gauge numbers under both the Standard Model and a single new confining group. Due to the possibility of many unconventional signatures, the focus is on direct production with semivisible jets. For the mediators to be able to decay, a global U(1) symmetry must be broken. This is best done by introducing a set of operators explicitly violating this symmetry. We find that there is only a finite number of such renormalizable operators and that the phenomenology can be classified into five distinct categories. We show that large regions of the parameter space are already excluded, while others are unconstrained by current search strategies. We also discuss how searches could be modified to better probe these unconstrained regions by exploiting special properties of semivisible jets. 
Speaker: Hugues Beauchesne (University of Sao Paulo) Material: • 15:20 Diboson Interference Resurrection at LHC via subjets 20' In the absence of new particles at the LHC, the effective field theory framework is an appropriate way to look for small deviations from the Standard Model (SM). In this case, the leading effect is encoded in the interference term between the SM and dimension-6 operators. However, in the high-energy limit, due to helicity selection rules, this term is suppressed for some operators, which hinders probes of new physics. In this work, we exploit the resurrection of this suppressed interference term by unfolding the angular distribution in diboson production. The unfolding is performed using jet substructure techniques, allowing us to build an observable sensitive to the energy growth for those operators. Speaker: Rafael Aoude (JGU, Mainz) Material: • 14:00 - 15:40 Parallel Session on Neutrinos and Baryogenesis Location: Lecture Hall 1, PI • 14:00 Leptogenesis from small lepton number violation 20' A compelling mechanism for leptogenesis consists of CP-violating oscillations of mass-degenerate pairs of sterile neutrinos with masses lying at the GeV scale. This kind of setup finds a natural embedding in models like the Inverse See-Saw, where the mass degeneracy of these neutrino pairs is associated with a small violation of the lepton number. We present the results of a systematic study based on the numerical solution of suitable Boltzmann equations. Interestingly, Inverse See-Saw models can also provide a solution to the Dark Matter puzzle. We will then discuss the impact, on Dark Matter phenomenology, of the requirement of successful generation of the baryon asymmetry of the Universe. Speaker: Dr. Giorgio Arcadi (MPIfK, Heidelberg) Material: • 14:20 Low-Scale Leptogenesis in Extended Neutrino Mass Models 20' Standard thermal leptogenesis in the type-I seesaw model requires very heavy right-handed neutrinos (RHNs). 
This diminishes the prospects of directly testing this scenario in experiments and, thus, motivates efforts to construct models that generate the baryon asymmetry of the Universe at a lower RHN mass scale. In this talk, I will discuss two such alternative scenarios: First, I will revisit Ernest Ma's scotogenic model of radiative neutrino masses and present an analysis of "scotogenic leptogenesis for pedestrians". Then, I will turn to a singlet-extended version of the type-I seesaw model that introduces additional sources of CP violation as well as novel RHN decay channels. In both cases, successful leptogenesis can be achieved for a RHN mass scale of 10 TeV (or lower) and without any approximate degeneracy in the RHN mass spectrum. The two scenarios discussed in this talk therefore present viable and attractive alternatives to the well studied case of resonant leptogenesis. This talk is based on work in collaboration with Tommi Alanne, Thomas Hugle, and Moritz Platscher at MPIK Heidelberg. Speaker: Dr. Kai Schmitz (MPIK Heidelberg) Material: • 14:40 Strong thermal SO(10)-inspired leptogenesis in the light of recent results from long-baseline neutrino experiments 20' We confront recent experimental results on neutrino mixing parameters with the requirements from strong thermal $SO(10)$-inspired leptogenesis. There is a nice agreement with latest global analyses supporting $\sin\delta < 0$ and normal ordering at $\sim 95\%$ C.L. On the other hand, the more stringent experimental lower bound on the atmospheric mixing angle starts to corner this paradigm. Prompted and encouraged by this rapid experimental advance, we obtain a precise determination of the allowed region in the plane $\delta$ versus $\theta_{23}$. Though most of the solutions are found outside the $95\%$ C.L. 
experimental region, there is still a big allowed fraction that does not require a too fine-tuned choice of the Majorana phases so that the neutrinoless double beta decay effective neutrino mass allowed range is still $m_{ee}\simeq [10,30]\,{\rm meV}$. We also show how the constraints depend on some parameters such as the initial pre-existing $B-L$ asymmetry and the intermediate neutrino Dirac mass. Speaker: Dr. Marco Chianese (University of Southampton) Material: • 15:00 Predictions for RH Neutrinos from the Littlest Seesaw and Leptogenesis 20' The Littlest Seesaw model based on two right-handed neutrinos with constrained Yukawa couplings provides a highly predictive description of neutrino masses and PMNS mixing parameters. If realised at high energies there will be renormalisation group corrections to the low energy predictions, which depend on the right-handed neutrino masses. We perform a chi squared analysis to determine the right-handed neutrino masses from a four-parameter fit to the low energy neutrino parameters, also eventually taking into account leptogenesis. Speaker: Sam Rowley (University of Southampton) Material: • 15:20 Pendulum Leptogenesis 20' We propose a new non-thermal Leptogenesis mechanism that takes place during the reheating epoch, and utilizes the Ratchet mechanism. The interplay between the oscillation of the inflaton during reheating and a scalar lepton leads to a dynamical system that emulates the well-known forced pendulum. This is found to produce driven motion in the phase of the scalar lepton which leads to the generation of a non-zero lepton number density that is later redistributed to baryon number via sphaleron processes. This model successfully reproduces the observed baryon asymmetry, while simultaneously providing an origin for neutrino masses via the seesaw mechanism. 
Speaker: Neil Barrie (Kavli IPMU) Material: • 14:00 - 15:40 Parallel Session on Axion-DM Location: Small Lecture Hall, Mathematics • 14:00 QCD axion dark matter from parametric resonance 20' The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant fa ~ O(10^11) GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires fa ~ (10^8 − 10^11) GeV. The axions may be warm enough to give deviations from cold dark matter in Large Scale Structure. Speaker: Dr. Keisuke Harigaya (UC Berkeley) Material: • 14:20 Axion strings and dark matter 20' The axion solution to the strong CP problem also provides a natural dark matter candidate. If the PQ symmetry was ever restored after inflation, topological defects of the axion field would have formed and produced relic axions, whose abundance is in principle calculable. Using numerical simulations, I will present a detailed study of the evolution of axion strings and the resulting spectrum of axions produced. The features found are important for a correct estimate of the total DM abundance. Work to appear in collaboration with A. Azatov, E. Hardy and G. Villadoro. Speaker: Marco Gorghetto (SISSA) Material: • 14:40 Axions in a highly protected gauge symmetry model 20' I will present the QCD axion or cosmological Goldstone bosons, such as ultra-light dark matter or quintessence, in a model with a global symmetry highly protected by gauge symmetries. The global symmetry is accidental, obtained from an abelian quiver with scalar bifundamental fields. The Goldstone boson mass may receive explicit breaking contributions, but these are already much suppressed for a few quiver sites if the gauge charges of the scalars are appropriately chosen. 
The model can be obtained by latticizing an abelian 5d gauge theory on the linear dilaton background. Speaker: Quentin Bonnefoy (Centre de Physique Théorique, Ecole Polytechnique) Material: • 15:00 Colour Unified Dynamical Axion 20' A massless quark solution to the Strong CP problem is implemented with an extra confining gauge group that is unified with the SM QCD group in a CUT (Colour Unified Theory). A novel contribution to the axion mass (due to the small-size instantons at the CUT breaking scale) renders the dynamical axion heavy. This allows one to protect the PQ symmetry from semi-classical gravitational effects that could spoil the solution to the strong CP problem. Speaker: Pablo Quílez Lasanta (UAM/CSIC) Material: • 15:20 Accidental Peccei-Quinn symmetry in a model of flavour 20' In Peccei-Quinn (PQ) solutions to the strong CP problem, a global $U(1)_{PQ}$ symmetry is typically added by hand. However, $U(1)_{PQ}$ need not be exact: it may arise from a discrete symmetry, provided the PQ solution is protected to sufficient order. We present a rather complete model, based on Pati-Salam unification and $A_4$, wherein such discrete symmetries are the very same symmetries that govern quark and lepton flavour. The QCD axion itself resides within $A_4$ triplet flavons, which dictate fermion Yukawa structures; axion and flavour scales are firmly linked. Potentially viable avenues for testing the model include: (1) model fitting to quark and lepton mixing, (2) flavour-violating meson and lepton decays, (3) dark matter. Speaker: Fredrik Björkeroth (INFN-LNF) Material: • 14:00 - 15:20 Parallel Session Formal • 14:00 The SO(10) F-theory Landscape and Tensor-Matter Transitions 20' We systematically construct all torically resolved SO(10) theories with possible additional (discrete) Abelian gauge symmetries. We show that most of these models are connected via higgsing or superconformal matter transitions involving small E8 instantons. 
Motivated by these observations, we classify superconformal matter transitions in 6d SUGRA theories with Abelian and non-Abelian gauge groups. Consistency under gauge and gravitational anomaly cancellation imposes strong constraints on the matter representations and charges involved in the transition. Speaker: Dr. Paul-Konstantin Oehlmann (Virginia Tech) Material: • 14:20 Yukawa couplings from D-branes on non-factorisable tori 20' The classical part of Yukawa couplings from D6-branes is computed by summing over worldsheet instantons. On non-factorisable tori the worldsheet instantons obey selection rules, which depend on the underlying lattice of the torus. Quantum corrections to the couplings are obtained by T-dualizing the setup and computing the overlapping wavefunctions of three chiral matter fields. Speaker: Mr. Christoph Liyanage (Bethe Center for Theoretical Physics) Material: • 14:40 Anomaly Cancellation in Effective Supergravity Theories from the Heterotic String: Two Simple Examples 20' We use Pauli-Villars regularization to evaluate the conformal and chiral anomalies in the effective field theories from Z3 and Z7 compactifications of the heterotic string without Wilson lines. We show that parameters for Pauli-Villars chiral multiplets can be chosen in such a way that the anomaly is universal in the sense that its coefficient depends only on a single holomorphic function of the three diagonal moduli. It is therefore possible to cancel the anomaly by a generalization of the four-dimensional Green-Schwarz mechanism. In particular we are able to reproduce the results of a string calculation of the four-dimensional chiral anomaly for these two models. 
Speaker: Jacob Leedom (University of California, Berkeley) Material: • 15:00 AdS-phobia, the WGC, the Standard Model and Supersymmetry 20' It has been recently argued that the presence of any stable non-SUSY AdS vacua implies that a theory cannot be consistently coupled to gravity (Ooguri-Vafa conjecture). In particular, this can be applied to the SM and its compactifications to 3 or 2 dimensions to obtain predictions on low energy physics. We will review some implications from these compactifications in general and present in detail the results from the compactification on $T^2/Z_4$. We find that the SM is not robust against the appearance of AdS vacua in 2D and hence would be by itself inconsistent with quantum gravity. On the contrary, if the SM is embedded at some scale into a SUSY version like the MSSM, the AdS vacua present in the non-SUSY case disappear or become unstable, pointing towards a preference for the SUSY extensions of the SM coming from the WGC. Moreover, in a $T^2/Z_4$ in which the orbifold action is embedded into a B-L symmetry, the bounds on neutrino masses and the cosmological constant found in previous works can be recovered, suggesting that the MSSM should be extended with a $U(1)_{B-L}$ gauge group. Finally, in other families of vacua the spectrum of SUSY particles can be further constrained. Speaker: Alvaro Herraez (UAM-CSIC) Material: • 15:40 - 16:10 Coffee Break ( Foyer ) • 16:10 - 17:50 Parallel Session on Collider Searches Location: Big Lecture Hall, Mathematics • 16:10 The fate of the Littlest Higgs Model with T-parity under 13 TeV LHC Data 20' We scrutinize the allowed parameter space of Little Higgs models with the concrete symmetry of T-parity by providing comprehensive LHC analyses of all relevant production channels of heavy vectors, top partners, heavy quarks and heavy leptons and all phenomenologically relevant decay channels. 
Constraints on the model are derived from the signatures of jets and missing energy or leptons and missing energy, using the collider phenomenology tool CheckMATE, which exploits all available LHC BSM searches at center-of-mass energies of 8 and 13 TeV. Besides the symmetric case, we also study the case of T-parity violation. Speaker: Mr. Daniel Dercks (University of Hamburg) Material: • 16:30 Higgs data does not rule out a sequential fourth generation 20' Contrary to common perception, we show that the current Higgs data do not eliminate the possibility of a sequential fourth generation whose members acquire their masses through the same Higgs mechanism as the first three generations. The inability to fix the sign of the bottom-quark Yukawa coupling from the available data plays a crucial role in accommodating a chiral fourth generation consistent with the bounds on the Higgs signal strengths. We show that the effects of such a fourth generation can remain completely hidden not only in the production of the Higgs boson through gluon fusion but also in its subsequent decay to γγ and Zγ. This, however, is feasible only if the scalar sector of the Standard Model is extended. We also provide a practical example illustrating how our general prescription can be embedded in a realistic model. Speaker: Ipsita Saha (INFN, Roma 1) Material: • 16:50 Higgs decays into SM particles in the NMSSM 20' The decays of neutral Higgs bosons into Standard Model fermions and gauge bosons in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) are investigated at the full one-loop order with some higher-order QCD corrections. We take mixing effects for the external Higgs particles consistently into account. We first discuss some formal aspects of our approach, before comparing our results to the predictions of existing public tools.
In particular, the decays of heavy Higgs-doublet states into gauge bosons are dominated by higher-order corrections; popular tree-level approximations fail. Finally, our framework is employed to investigate the properties of a light singlet-dominated state with a mass close to 95 GeV. We find scenarios in which the fluctuations reported by LEP in the $b\bar{b}$ searches and by CMS in the diphoton searches can be interpreted as true signals. Speaker: Dr. Sebastian Paßehr (LPTHE) Material: • 17:10 Common exotic decays of top partners 20' Many Standard Model extensions that address the hierarchy problem contain Dirac-fermion partners of the top quark, which are typically expected around the TeV scale. Searches for these vector-like quarks mostly focus on their decay into electroweak gauge bosons or a Higgs plus a Standard Model quark. In this talk, motivated by composite Higgs models, we propose a set of simplified scenarios that include more exotic decay channels, which modify the search strategies and affect the bounds. Analysing several classes of underlying models, we show that exotic decays are the norm and commonly appear with large rates. All of these models contain light new scalars that couple to top partners with charge 5/3, 2/3, and −1/3. We identify the contributing particle content and the novel top partner decays that occur most commonly, and provide effective Lagrangians, benchmarks, and a brief discussion of phenomenological bounds and newly occurring final states. Speaker: Dr. Thomas Flacke (IBS CTPU) • 17:30 Problems with unstable particles in high energy processes 20' Three issues arise in the description of processes with unstable particles. 1. The plane-wave formalism is insufficient for some realistic problems; a wave-packet formalism becomes necessary. Illustration: the t-channel singularity in small-angle scattering. 2. The perturbation series becomes invalid, as for the infrared divergence in QED, but in a stronger form.
Illustration: the s-channel singularity near the threshold. 3. Theories with unstable fundamental particles do not admit the conditions needed to construct a perturbative diagrammatic expansion. Signature: gauge dependence in the description of observable processes at higher orders. Speaker: Prof. Ilya Ginzburg (Sobolev Institute of Mathematics) • 16:10 - 17:50 Parallel Session on Astro-DM I Location: Lecture Hall 1, PI • 16:10 White Dwarfs as DM Detectors 20' Dark matter that is capable of sufficiently heating a local region in a white dwarf will trigger runaway fusion and ignite a Type Ia supernova. We consider dark matter (DM) candidates that heat through the production of high-energy Standard Model (SM) particles, and show that such particles will efficiently thermalize the white dwarf medium and ignite supernovae. Based on the existence of long-lived white dwarfs and the observed supernova rate, we put new constraints on ultra-heavy DM candidates $m_\chi \gtrsim 10^{16}~\text{GeV}$ which produce SM particles through annihilation, decay, and DM-SM scattering in the stellar medium. As a concrete example, we rule out supersymmetric Q-ball DM in parameter space complementary to terrestrial bounds. We put further constraints on DM that is captured by white dwarfs, considering the formation and self-gravitational collapse of a DM core. For asymmetric DM, such a core may form a black hole that ignites a supernova via Hawking radiation, and for "almost asymmetric" DM with a non-zero but sufficiently small annihilation cross section, the core may ignite the star via a burst of annihilation during gravitational collapse. This constrains much lighter candidates, $m_\chi \gtrsim 10^{7}~\text{GeV}$. It is also intriguing that the DM-induced ignition discussed in this work provides an alternative mechanism for triggering supernovae from sub-Chandrasekhar-mass progenitors.
Speaker: Ryan Janish (University of California, Berkeley) Material: • 16:30 BBN constraints on MeV-scale dark sectors: Sterile decays 20' We study constraints from Big Bang Nucleosynthesis on inert particles in a dark sector which contribute to the Hubble rate and therefore change the predictions of the primordial nuclear abundances. We pay special attention to the case of MeV-scale particles decaying into dark radiation, which are neither fully relativistic nor non-relativistic at all temperatures relevant to Big Bang Nucleosynthesis. As an application we discuss the implications of our general results for models of self-interacting dark matter with light mediators. Speaker: Marco Hufnagel (DESY Hamburg) Material: • 16:50 Boltzmann equation for relativistic species and Hot Dark Matter 20' The latest Planck CMB data seem to strongly constrain Hot Dark Matter scenarios, although they remain in tension with direct Hubble constant measurements. One can expect that in the near future the experimental uncertainty on the effective number of neutrino species N_{eff} will shrink considerably. Therefore, in order to estimate the allowed parameter space of models predicting a Hot Dark Matter component, it is crucial to calculate its relic density with high accuracy. In my talk I will exploit the Boltzmann equation in the form suitable for relativistic species in Weinberg’s Higgs portal model. I will also discuss how, in similar scenarios, the different statistics of incoming/outgoing particles may influence the results. Work in collaboration with Prof. Marek Olechowski. Speaker: Paweł Szczerbiak (University of Warsaw) Material: • 17:10 Dark Matter Production in an Early Matter Dominated Era 20' We investigate dark matter (DM) production in an early matter-dominated era where a heavy long-lived particle decays to radiation and DM. In addition to DM annihilation into and thermal DM production from radiation, we include direct DM production from the decay of the long-lived particle.
Speaker: Mr. Fazlollah Hajkarim (BCTP, University of Bonn) Material: • 17:30 Hunting light dark sectors in sub-GeV dark matter scenarios 20' Minimal scenarios with light (sub-GeV) dark matter are usually accompanied by a correspondingly light "dark sector". We will show that the presence of the latter usually leads to strong bounds from cosmology and bright detection prospects at fixed-target experiments and colliders. Speaker: Mr. Luc Darmé (LPTHE) Material: • 16:10 - 17:30 Parallel Session on Flavor and EFTs Location: bctp, Seminar Room 1 • 16:10 An EFT approach to lepton anomalies 20' Besides neutrino masses, an assortment of precision observables probe new physics in the lepton sector. Experimental anomalies include a 3.5-sigma discrepancy in the anomalous magnetic moment of the muon and a 4-sigma indication of lepton universality violation in B-meson semi-leptonic decays. Effective field theory is a useful framework for studying observables sensitive to multi-TeV scales in a model-independent way. Using this approach, I discuss the interplay between different lepton observables, including possibilities of relating several of them simultaneously. I also comment on the links between EFT and specific UV completions. Speaker: Rupert Coy (Laboratoire Charles Coulomb, CNRS) Material: • 16:30 Future DUNE constraints on EFT 20' In the near future, fundamental interactions at high-energy scales may be most efficiently studied via precision measurements at low energies. In this talk I will discuss the possible impact of the DUNE neutrino experiment on constraining the SM Effective Field Theory. The unprecedented neutrino flux offers an opportunity to greatly improve the current limits via precision measurements of trident production and neutrino scattering off electrons and nuclei in the DUNE near detector.
We quantify the DUNE sensitivity to dimension-6 operators in the SMEFT Lagrangian, and find that in some cases operators suppressed by an O(30) TeV scale can be probed. We also compare the DUNE reach to that of future experiments involving atomic parity violation and polarization asymmetry in electron scattering, which are sensitive to an overlapping set of SMEFT parameters. Speaker: Giovanni Grilli di Cortona (University of Sao Paulo) Material: • 16:50 SO(3) family symmetry, the axion and the flavor puzzle 20' The understanding of flavor and the strong CP problem could be closely related. Motivated by the idea of Comprehensive Unification of elementary particle forces and families, we propose a minimal SO(3) flavor extension of the Standard Model which accounts for the observed fermion mass hierarchies, and provides a dynamical understanding of quark mixing and CP violation. Speaker: Mario Reig (Instituto de Física Corpuscular (IFIC)) Material: • 17:10 Froggatt-Nielsen mechanism in a model with 331-gauge group 20' The models with the gauge group $SU(3)_c\times SU(3)_L\times U(1)_X$ (331-models) have been advocated to explain why there are three fermion generations in Nature. As such they provide a partial understanding of the flavour sector. The hierarchy of fermion masses in the Standard Model is another puzzle which remains without a compelling explanation. In this talk I present a model that incorporates the Froggatt-Nielsen mechanism into a 331-model in order to address both fundamental problems. It turns out that no additional scalar representations are needed to achieve this. The 331-models thus naturally include explanations of both the number of fermion generations and their mass hierarchy. This talk is based on arXiv:1706.09463 [hep-ph].
Speaker: Niko Koivunen (University of Helsinki) Material: • 16:10 - 17:30 Parallel Session on Relaxion/Light bosons • 16:10 Cosmological Higgs relaxation without inflation 20' The relaxion models propose a new idea to explain the smallness of the Higgs mass. They rely on the scanning of the Higgs mass parameter by a new field, the relaxion, and a back-reaction mechanism that is triggered when the Higgs vacuum expectation value has reached the size of the electroweak scale, making the relaxion evolution cease. In the usual relaxion model the scanning happens during an inflationary period. Here we explore the cosmological consequences if the relaxation happens independently of inflation. In this scenario, the stopping mechanism is provided by particle-production friction. The parameter space is very different from the usual relaxion scenarios; for instance, the relaxion mass can be as high as O(100) TeV. Speaker: Nayara Fonseca (DESY) • 16:30 Relaxion Dark Matter 20' I discuss a scenario in which the relaxion field, whose evolution in the early universe is responsible for the smallness of the EW scale compared to the cutoff of the theory, also constitutes the Dark Matter of the Universe. Speaker: Enrico Morgante (DESY) • 16:50 Backreaction in the dynamical relaxation 20' Dynamical relaxation is an interesting framework in which the scale of new physics is pushed far beyond the observable range without abandoning the principles of naturalness. A low value of the electroweak scale originates from a dynamical selection process that comes from an interaction between the Higgs boson and an axion-like particle (a relaxion). During the relaxation, the relaxion rolls down a potential hill, scanning a range of Higgs masses in the process. Usually, it is assumed that this roll-down has no effects beyond a varying mass of the Higgs.
In this talk, I will discuss the possibility of non-negligible side effects of the relaxation and how their inclusion may spoil the mechanism, especially if additional dimensions are involved. Speaker: Adam Markiewicz (University of Warsaw) Material: • 17:10 Generalized Clockwork Theory 20' I show how generating hierarchies via the clockwork mechanism can be generalized in various directions, including non-abelian global groups, supergravity and SUSY breaking. Speaker: Dr. Ido Ben-Dayan (Ariel University) Material: • Thursday, May 24, 2018 • 08:00 - 09:00 Registration Location: Zeichensaal (1st floor) • 09:00 - 10:00 Review Talk Convener: Stefan Foerste • 09:00 Big Data and Machine Learning in String Phenomenology: An Overview 1h0' Recently, machine learning techniques have entered the field of string phenomenology, for example in order to identify promising string models in the vast landscape of string theory vacua. We review different approaches and techniques, like 'Autoencoders', 'Reinforcement Learning' and 'Generative Adversarial Networks', and apply them to several physical questions. Speaker: Mr. Patrick Vaudrevange (TU München) Material: • 10:00 - 10:30 Plenary Session Convener: Stefan Foerste • 10:00 Quantum Gravity constraints on Particle Physics and Cosmology 30' Consistency with quantum gravity can have significant consequences for low-energy physics. Interestingly, it seems that not every effective field theory can be consistently coupled to quantum gravity unless it satisfies some additional consistency constraints. Recently substantial effort has been dedicated to determining these constraints in terms of black-hole physics and string theory. Some proposals, dubbed Quantum Gravity Conjectures, would imply that theories with small gauge couplings or parametrically large scalar field variations are inconsistent with quantum gravity. This can have important phenomenological implications for Beyond Standard Model proposals and Inflationary Cosmology models.
Furthermore, when applying one of these conjectures to compactifications of the Standard Model, we obtain a lower bound for the cosmological constant in terms of the neutrino masses. This can also be translated into an upper bound for the EW scale around the TeV range, bringing a new perspective to the issue of the EW hierarchy. Speaker: Ms. Irene Valenzuela (IFT UAM/CSIC) Material: • 10:30 - 11:00 Coffee Break ( Foyer ) • 11:00 - 12:30 Plenary Session Convener: Stuart Raby • 11:00 Light states in the distance: evidence for a Quantum Gravity Conjecture 30' It has been conjectured that in theories consistent with quantum gravity, infinite distances in field space coincide with an infinite tower of states becoming massless exponentially fast. In this talk I present non-trivial evidence for this conjecture from the study of Type IIB string theory on Calabi-Yau threefolds. I will also comment on the observation that the behavior of the metric and gauge coupling function near infinite distance points can be recovered by integrating out the BPS states. This suggests that these properties might be emergent. Speaker: Dr. Thomas Grimm (Utrecht University & Max Planck Institute for Physics) Material: • 11:30 Pole N-flation 30' A second-order pole in the scalar kinetic term can lead to a class of inflation models with universal predictions, referred to as pole inflation or α-attractors. While this kinetic structure is ubiquitous in supergravity effective field theories, realising a consistent UV-complete model in e.g. string theory is a non-trivial task. For one thing, quantum corrections are expected in the vicinity of the pole, which may spoil the typical attractor dynamics. As a conservative estimate of the range of validity of supergravity models of pole inflation we employ the weak gravity conjecture (WGC). We find that this constrains the accessible part of the inflationary plateau by limiting the decay constant of the axion partner.
For the original single-complex-field models, the WGC does not even allow the inflaton to reach the inflationary plateau region. We propose addressing these problems by invoking the assistance of N scalar fields from the open-string moduli. This improves radiative control by reducing the required range of each individual field. Furthermore, it relaxes the WGC bound, allowing inflation on the plateau while remaining at a finite distance from the pole. Finally, we outline steps towards an embedding of pole N-flation in type IIB string theory on fibred Calabi-Yau manifolds. Speaker: Alexander Westphal (DESY) Material: • 12:00 From stringy vacua with particle physics spectrum to the effective action - exemplified by a non-factorisable orientifold - 30' For most string vacua, only the chiral matter spectrum can be computed in terms of topological data of the compact extra dimensions, while the vector-like spectrum remains elusive. If the compact extra dimensions consist of (orbifolds of) tori, however, the string quantisation can be performed explicitly, and thus not only the full tower of massless and massive matter states can be computed but also exact results on the effective action beyond the leading SUGRA approximation can be derived. In particular, the one-loop corrections to the gauge couplings generically depend on the moduli of the compact extra dimensions, challenging the separation of gravity and QFT sectors at tree level. In this talk, I will demonstrate these features in terms of a Pati-Salam model constructed on D-branes in a non-factorisable orientifold background. Speaker: Prof.
Gabriele Honecker (Johannes-Gutenberg-Universität Mainz) Material: • 12:30 - 14:00 Lunch Break • 14:00 - 15:40 Parallel Session on DM Location: Big Lecture Hall, Mathematics • 14:00 Dark matter direct detection at one loop 20' The strong direct detection limits indicate that dark matter may not couple directly to quarks, so that direct detection occurs only at loop level. We study direct detection in the prototype example of an electroweak-singlet dark matter fermion coupled to an extended dark sector composed of a new fermion and a new scalar. We also discuss the cases in which either the new fermion or the new scalar is a SM particle. The results can be applied to many different scenarios. We specifically discuss the application to a radiative neutrino mass model. Speaker: Michael Schmidt Material: • 14:20 Dark matter direct detection with pseudoscalar mediators 20' Due to its highly suppressed cross section, (fermionic) dark matter interacting with the Standard Model via pseudoscalar mediators is expected to be essentially unobservable in direct detection experiments. We consider both a simplified model and a more realistic model based on an extended two-Higgs-doublet model and compute the leading one-loop contribution to the effective dark matter-nucleon interaction. This higher-order correction dominates the scattering rate completely and can naturally, i.e. for couplings of order one, lead to a direct detection cross section in the vicinity of the neutrino floor. Taking the observed relic density and constraints from low-energy observables into account, we analyze the direct detection prospects in detail and find regions of parameter space that are within reach of upcoming direct detection experiments such as XENONnT, LZ, and DARWIN.
Speaker: Stefan Vogl Material: • 14:40 Self-interacting dark matter with a stable vector mediator 20' Self-interactions of dark matter induced by a MeV-scale vector mediator $Z_\text{D}$ can naturally cure some shortcomings of the standard cold dark matter paradigm at small scales. However, if the vector particle decays into Standard Model states, such a scenario is robustly excluded by constraints on energy injection during the CMB era. We study to what extent this conclusion can be circumvented if $Z_\text{D}$ is stable and annihilates into lighter degrees of freedom, such that it contributes itself to the observed amount of dark matter in the Universe. Indeed, we find viable parts of the parameter space which lead to the desired self-interaction cross section while being compatible with all relevant bounds from CMB and BBN observations. Speaker: Mr. Sebastian Wild (TU München) Material: • 15:00 Residual annihilations of asymmetric DM 20' Dark matter coupled to light mediators has been invoked to resolve the putative discrepancies between collisionless cold DM and galactic structure observations. However, $\gamma$-ray searches and the CMB strongly constrain such scenarios. To ease the tension, we consider asymmetric DM. We show that, contrary to the common lore, detectable annihilations occur even for large asymmetries, and derive bounds from the CMB, $\gamma$-ray, neutrino and antiproton searches. We then identify the viable parameter space for self-interacting DM. Direct detection does not exclude this scenario, but provides a way to test it. Speaker: Dr. Iason Baldes (DESY) Material: • 15:20 Dark Matter from the Top 20' We study simplified models of top-flavoured dark matter in the framework of Dark Minimal Flavour Violation. In this setup the coupling of the dark matter flavour triplet to SM quark triplets constitutes the only new source of flavour and CP violation.
The parameter space of the model is restricted by LHC searches with missing-energy final states, by neutral meson mixing data, by the observed dark matter relic abundance, and by the absence of a signal in direct detection experiments. We consider all of these constraints in turn, studying their implications for the allowed parameter space. Especially interesting is the combination of all constraints, revealing a non-trivial interplay. Large parts of the parameter space are excluded, most significantly in light of future bounds from upcoming experiments. Speaker: Simon Kast Material: • 14:00 - 15:40 Parallel Session on Neutrino Mass Models Location: Lecture Hall 1, PI • 14:00 Neutrino masses from Planck-scale lepton number breaking 20' We consider an extension of the Standard Model by right-handed neutrinos and argue that, under plausible assumptions, a neutrino mass of O(0.1) eV is naturally generated by the breaking of lepton number at the Planck scale, possibly by gravitational effects, without the necessity of introducing new mass scales in the model. Some implications of this framework are also briefly discussed. Speaker: Dr. Takashi Toma (Technische Universität München) Material: • 14:20 Dirac Neutrinos and Their Many Surprising Connections 20' More than eighty years after they were first proposed, neutrinos still remain an enigma. Although they are an integral part of the Standard Model, we still know very little about them. In particular, the Dirac or Majorana nature of neutrinos remains undetermined. However, for a long time theoretical particle physicists have believed that neutrinos must be Majorana in nature, and several elegant mass-generation mechanisms have been proposed for Majorana neutrinos. In this talk, I will discuss the multitude of ways in which naturally small Dirac neutrino masses can be generated.
I will also discuss the various interesting and sometimes surprising connections between the Dirac nature of neutrinos and dark matter stability, proton decay, etc. Speaker: Rahul Srivastava Material: • 14:40 Seesaw roadmap to neutrino masses 20' We describe the many pathways to generate Majorana and Dirac neutrino masses through generalized dimension-5 operators à la Weinberg. The presence of new scalars beyond the Standard Model Higgs doublet implies new possible field contractions, which are required in the case of Dirac neutrinos. We also note that, in the Dirac neutrino case, the extra symmetries needed to ensure the Dirac nature of neutrinos can also be made responsible for the stability of dark matter. • 15:00 WIMP dark matter in a Two-loop Dirac neutrino mass mechanism 20' Despite great efforts over several decades, neutrinoless double beta decay has not yet been detected, and neutrinos could be Dirac particles. In this talk we present a “scotogenic” mechanism relating small neutrino masses and cosmological dark matter. Neutrinos are Dirac fermions with masses arising only at two-loop order through the sector responsible for dark matter. A global spontaneously broken U(1) symmetry leads to a physical Diracon that induces invisible Higgs decays which add to the Higgs-to-dark-matter mode. This enhances sensitivities to spin-independent WIMP dark matter searches below $m_h/2$. • 15:20 Deviations of exact neutrino textures using radiative neutrino masses 20' The Weinberg operator allows for the construction of radiative Majorana neutrino masses. In this talk, it will be shown that it is possible to construct a one-loop diagram that will be the principal component of the neutrino mass matrix and that will have an exact mixing matrix with $\theta_{13}=0$.
The addition of a two-loop diagram, which is naturally suppressed, allows the creation of the correct perturbations that will give a neutrino mixing matrix with entries within experimental constraints, including the possibility of large Dirac CP phases. Speaker: Daniel Wegman Material: • 14:00 - 15:40 Parallel Session on Unified Models Location: Small Lecture Hall, Mathematics • 14:00 Axion Predictions in SO(10)xU(1) GUT models 20' Non-supersymmetric Grand Unified SO(10) × U(1)PQ models have all the ingredients to solve several fundamental problems of particle physics and cosmology — neutrino masses and mixing, baryogenesis, the non-observation of strong CP violation, dark matter, inflation — in one stroke. The axion — the pseudo Nambu-Goldstone boson arising from the spontaneous breaking of the U(1)PQ Peccei-Quinn symmetry — is the prime dark matter candidate in this setup. We determine the axion mass and the low-energy couplings of the axion to the Standard Model particles in terms of the relevant gauge symmetry breaking scales. We work out the constraints imposed on the latter by gauge coupling unification. We discuss the cosmological and phenomenological implications. Speaker: Anne Ernst (Desy) Material: • 14:20 Witten's loop in the flipped SU(5) revisited 20' The flipped SU(5) unification represents one of the rare cases in which Witten's loop mechanism for the radiative RH neutrino mass generation may be implemented in a potentially realistic manner. It was shown recently that, in a large part of the parameter space, the tight flavour structure of the minimal model of this kind yields relatively strong constraints on the principal smoking-gun observables of this setting, namely the partial proton decay widths with electrons and muons in the final state. We shall present a new study of the absolute size of Witten's effect in the minimal flipped SU(5) scenario and comment on its impact on the relevant physics.
Speaker: Michal Malinsky Material: • 14:40 Planck-scale induced uncertainties in proton lifetime estimates and flavour structure of GUTs 20' Grand Unified Theories predict the proton to be unstable, and if a particular model is considered, the partial proton decay widths can be calculated. Most often, however, a renormalizable setting is considered and the effective operators possibly describing the physics at the Planck scale are not taken into account. It is well known that one such effective operator, involving the unification gauge field strength tensor, may shift the position of the unification scale and hence cause a considerable error in proton lifetime estimates. Here, we study the Planck-suppressed operators which contribute to the flavour structure of the theory and have an impact on the determination of the individual partial proton decay widths. Speaker: Helena Kolesova Material: • 15:00 SO(10) inspired Z′ models at the LHC 20' We study Z′ models arising from SO(10), focussing in particular on the gauge group $SU(3)_C\times SU(2)_L\times U(1)_R\times U(1)_{B-L}$, broken at the TeV scale to the Standard Model gauge group. This gauge group is well motivated from SO(10) breaking and allows neutrino mass via the linear seesaw mechanism. Assuming supersymmetry, we consider single-step gauge unification to predict the gauge couplings, then consider the detection and characterisation prospects of the resulting Z′ at the LHC by studying its possible decay modes into di-leptons, including the expected forward-backward asymmetry, as well as into Higgs bosons, also comparing these predictions to other related Z′ scenarios such as the well-studied $U(1)_{B-L}$ and $U(1)_\chi$ models. Speaker: Mr.
Simon King (University of Southampton) Material: • 15:20 Pseudo-Goldstone scalars in the minimal SO(10) Higgs model 20' The minimal renormalizable SO(10) Higgs model in which the unified symmetry is broken down by the adjoint representation is known to suffer from tachyonic instabilities along all potentially realistic symmetry-breaking chains. A few years ago, this issue was identified as a mere relic of the tree-level calculations, and the radiative corrections to the masses of the pair of the “most dangerous” pseudo-Goldstone scalars in the model’s spectrum were computed. Remarkably enough, it turns out that - in its minimal potentially realistic renormalizable realization - there is a third pseudo-Goldstone scalar (a full SM singlet) suffering from the same disease that, until recently, happened to escape the community’s attention. In this talk we will provide a short account of the calculation of the one-loop correction to its mass and comment briefly on the prospects of an implementation of this scheme within a fully realistic grand unified scenario. Speaker: Katerina Jarkovska Material: • 14:00 - 15:20 Parallel Session on Gravity and Quantum Effects • 14:00 An asymptotically safe link between the Planck scale and the electroweak scale 20' I will discuss the asymptotic-safety paradigm as a framework for models of quantum gravity and matter at and beyond the Planck scale. I will highlight indications for the theoretical viability of this scenario and discuss how, in this setting, Planck-scale physics could lead to testable consequences at the electroweak scale. In particular, the asymptotic-safety paradigm could have a higher predictive power than the Standard Model, and thus the values of free parameters of the Standard Model, such as, e.g., the values of gauge couplings, the Higgs mass, as well as Yukawa couplings, could be fixed uniquely by demanding asymptotic safety at the Planck scale.
Speaker: Astrid Eichhorn Material: • 14:20 The status of Horava gravity 20' Horava gravity is a proposal for a non-relativistic UV completion of General Relativity, which has been proven to be renormalizable and (in 2+1 dimensions) asymptotically free while propagating non-trivial degrees of freedom. We review the current status of the theory as a plausible proposal for a description of quantum gravity and describe the main challenges for its success. Speaker: Mario Herrero Valea Material: • 14:40 Quark masses from Planck-scale physics 20' Asymptotic safety provides a framework for a UV-complete model of quantum gravity and matter. Within this framework, Renormalization Group flows allow one to connect Planck-scale physics to the electroweak scale. I will present indications that asymptotically safe quantum gravity uniquely determines the top Yukawa coupling, resulting in a “retrodiction” of the top-quark mass close to its experimental value. Taking into account the non-trivial role of the U(1) hypercharge in the asymptotically safe setting could moreover generate a mass difference between charged quarks and “retrodict” the bottom-quark mass. Speaker: Aaron Held Material: • 15:40 - 16:10 Coffee Break ( Foyer ) • 16:10 - 17:30 Parallel Session on DM at Colliders Location: Big Lecture Hall, Mathematics • 16:10 Effective Field Theory for the LHC and Dark Matter 20' In this talk, I will discuss the EFT approach to LHC physics and Dark Matter searches regarding its validity and the prospects to gain information on the underlying UV physics. In particular, in the first case correlations will be pointed out that allow direct access to the mass of new particles beyond collider reach, while in the second case a new effective description will be presented that allows one to consistently combine results from direct-detection experiments and LHC searches for missing energy. Speaker: Dr.
Florian Goertz (MPIK) Material: • 16:30 Minimal Dark Matter at Collider 20' A massive particle charged under the electroweak gauge symmetry is one of the best candidate of the dark matter and called "Minimal dark matter." In this talk, I will discuss new strategies for such a dark matter hunting at collider, using precision measurements of the standard processes. Speaker: Satoshi Shirai (Kavli IPMU) Material: • 16:50 Top-philic dark matter within and beyond the WIMP paradigm 20' We present a comprehensive analysis of top-philic Majorana dark matter that interacts via a colored t-channel mediator. Despite the simplicity of the model - introducing three parameters only - it provides an extremely rich phenomenology allowing to accommodate the relic density for a large range of coupling strengths spanning over six orders of magnitude. This model features all 'exceptional' mechanisms for dark matter freeze-out, including the recently discovered conversion-driven freeze-out mode, with interesting signatures of long-lived colored particles at colliders. We constrain the cosmologically allowed parameter space with current experimental limits from direct, indirect and collider searches, with special emphasis on light dark matter below the top mass. In particular, we explore the interplay between limits from Xenon1T, Fermi-LAT and AMS-02 as well as limits from stop, monojet and Higgs invisible decay searches at the LHC. We find that several blind spots for light dark matter evade current constraints. The region in parameter space where the relic density is set by the mechanism of conversion-driven freeze-out can be conclusively tested by R-hadron searches at the LHC with 300 fb−1. Speaker: Dr. Jan Heisig (RWTH Aachen University) Material: • 17:10 Interplay between collider and dark matter searches in composite Higgs models 20' Many composite Higgs models predict the existence of vector-like quarks with masses outside the reach of the LHC, e.g. 
$m_Q > 2$ TeV, in particular if these models contain a dark matter candidate. In such models the mass of the new resonances is bounded from above to satisfy the constraint from the observed relic density. We therefore develop new strategies to search for vector-like quarks at a future 100 TeV collider and evaluate what masses and interactions can be probed. We find that masses as large as ∼6.4 (∼9) TeV can be tested if the fermionic resonances decay into Standard Model (dark matter) particles. We also discuss the complementarity of dark matter searches, showing that most of the parameter space can be closed. On balance, this study motivates further the consideration of a higher-energy hadron collider for a next generation of facilities. Speaker: Dr. Mikael Chala (DESY) Material: • 16:10 - 17:30 Parallel Session on Astro-DM II Location: Lecture Hall 1, PI • 16:10 Lessons from the DAMPE data 20' The cosmic-ray electron + positron flux measured recently by the DAMPE experiment up to 5 TeV has gained great attention. So far, the interpretations focus on the spectral softening around 0.9 TeV and on a small fluctuation of one data point at 1.4 TeV. We want to point out three additional features that can be found in the data by separating the primary and the secondary components of the cosmic rays, and discuss their implications. Speaker: Ms. Annika Reinert (BCTP, PI der Universität Bonn) Material: • 16:30 Radiation Injection as a Solution to the EDGES 21 cm Anomaly 20' The recently claimed anomaly in the measurement of the 21 cm hydrogen absorption signal by EDGES at redshift z = 17, if cosmological, requires the existence of new physics. The possible attempts to resolve the anomaly rely on either (i) cooling the hydrogen gas via new dark matter-hydrogen interactions or (ii) modifying the soft photon background beyond the standard CMB (suggested by the ARCADE 2 excess as well). 
We argue that solutions belonging to the first class are generally in tension with cosmological dark matter probes once simple dark sector models are considered. Therefore, we propose soft photon emission by light dark matter as a natural solution to the 21 cm anomaly. We find that the signal singles out a photophilic dark matter candidate characterised by an enhanced collective decay mechanism, such as axion mini-clusters. Speaker: Kristjan Kannike Material: • 16:50 Leptophilic dark matter from gauged lepton number: Phenomenology and gravitational wave signatures 20' In this work, we consider a model in which the SM is extended by a lepton number gauge group $U(1)_L$. The arising gauge anomalies are canceled by adding two sets of SM vector-like leptons. We further add a scalar field that spontaneously breaks $U(1)_L$. A residual global symmetry ensures the stability of the lightest additional lepton, thus providing a dark matter candidate. We investigate current and future constraints on the model from collider searches as well as dark matter experiments. We further study the lepton number breaking phase transition, particularly focusing on its potential to generate a stochastic gravitational wave background accessible to GW interferometry as a complementary way to probe the model. • 17:10 The inflaton portal to PeV-EeV Dark Matter 20' In this talk I will present a new paradigm for dark matter production in which the dark constituent of our Universe is in contact with the visible bath exclusively through the inflationary sector. I will show that experimental constraints on the inflationary energy scale and the dark matter relic density can be balanced by the use of a very minimal setup and that such a construction allows one to constrain both inflation physics and dark matter phenomenology by various cosmological considerations. 
I will show that such a model can therefore be very predictive and lead to a dark matter mass range of order O(10-1000) PeV, which could be probed by various experimental collaborations, depending on the interactions considered. Speaker: Dr. Lucien Heurtier (University of Arizona) Material: • 16:10 - 17:30 Parallel Session on EW Vacuum Stability Location: Small Lecture Hall, Mathematics • 16:10 Decay Rate of Electroweak Vacuum in the Standard Model and Beyond 20' We perform a precise calculation of the decay rate of the electroweak vacuum in the standard model as well as in models beyond the standard model. We use a recently developed technique to calculate the decay rate of a false vacuum, which provides a gauge-invariant calculation of the decay rate at the one-loop level. We give a prescription to take into account the zero modes associated with translational, dilatational, and gauge symmetries. We calculate the decay rate per unit volume, γ, by using an analytic formula. The decay rate of the electroweak vacuum in the standard model is estimated to be $\log(\gamma \times \mathrm{Gyr}\,\mathrm{Gpc}^3) = -582$. We also provide the uncertainties in γ due to the uncertainties of the Higgs mass, the top quark mass, the strong coupling constant and the choice of the renormalization scale. The analytic formula for the decay rate, as well as its fitting formula given in this paper, is also applicable to models that exhibit a classical scale invariance at a high energy scale. As an example, we consider extra fermions that couple to the standard model Higgs boson, and discuss their effects on the decay rate of the electroweak vacuum. Speaker: So Chigusa Material: • 16:30 Constraining BSM Scalar Sectors through Vacuum Stability 20' Since the LHC has not provided us with any hints towards new physics, it is ever more interesting to constrain BSM theories from purely theoretical considerations. 
Requiring that the electroweak vacuum in any BSM model is at least metastable can lead to stringent constraints on the parameter space of the model. Many popular extensions of the SM, such as supersymmetry, feature greatly extended scalar sectors. In the resulting high-dimensional scalar potential, vacuum decay can happen in many different field directions. Constraints from vacuum decay thus rely on finding all minima of multidimensional scalar potentials, which is a nontrivial task even at tree level. We present results on the vacuum stability in supersymmetric models from a new code aiming to provide an efficient and reliable check of vacuum stability for use in BSM parameter scans. Speaker: Mr. Jonas Wittbrodt (DESY) Material: • 16:50 RG improvement of multi-field potentials 20' In this talk a new method for renormalisation-group (RG) improvement of effective potentials in models with extended scalar sectors is presented. With the use of this method, the RG-improved potential can be expressed as the tree-level potential evaluated at a suitably chosen field-dependent scale. This follows from solving the RG equation for the effective potential with a suitably chosen boundary condition. In this talk I introduce the method and discuss its advantages (e.g. the possibility to compute vacuum expectation values of the scalar fields which are substantially less scale dependent than the ones following from the perturbative one-loop potential), applications (e.g. the study of stability of the potential beyond tree level, which is impossible without RG improvement) and shortcomings. The presentation is based on JHEP 1803 (2018) 014. Speaker: Bogumila Swiezewska Material: • 17:10 Electroweak vacuum stability from extended Higgs portal dark matter and type-I seesaw 20' We investigate the electroweak vacuum stability in the presence of a scalar dark matter candidate and a neutrino mass model through type-I seesaw. 
The minimal Higgs portal dark matter framework is extended here with another scalar singlet field carrying a non-zero vacuum expectation value. Our results reveal that the inclusion of this extra scalar not only helps in achieving absolute stability (even with large neutrino Yukawa coupling) of the electroweak vacuum up to the Planck scale, but also opens up the low-mass window for scalar dark matter (< 500 GeV) which was otherwise excluded by recent XENON 1T data. Speaker: Dr. ARUNANSU SIL (IIT Guwahati) Material: • 16:10 - 17:30 Parallel Session on SUSY and Naturalness Location: bctp, Seminar Room 1 • 16:10 Electroweak symmetry breaking by a neutral sector: Dynamical relaxation of the little 20' We propose a new dynamical relaxation mechanism based on a singlet extension of the MSSM. In this scenario, a small enough soft mass of the MSSM singlet is responsible for the electroweak symmetry breaking and the Higgs VEV of order 100 GeV, whereas the effects of a large soft mass parameter of the Higgs boson, $-m_{h_u}^2$, are dynamically compensated by an MSSM singlet field. The smallness of the Higgs VEV can be protected by a hierarchy between the gravity- and gauge-mediated SUSY breaking scales and the smallness of the relevant Yukawa couplings. Since its VEV is adjusted by the VEV of the Higgs of order 100 GeV, the Z boson mass can remain light even if the stop mass is heavier than 10 or 20 TeV. A focus point of the singlet's soft mass parameter emerges around the stop decoupling scale, and so the various fine-tuning measures can be reduced to order 10. 
Speaker: Bumseok Kyae (Pusan National University) Material: • 16:30 Exploring Non-Holomorphic Soft Terms in the Framework of Gauge Mediated Supersymmetry Breaking 20' It is known that, in the absence of a gauge singlet field, a specific class of supersymmetry (SUSY) breaking non-holomorphic (NH) terms can be soft breaking in nature, so that they may be considered along with the Minimal Supersymmetric Standard Model (MSSM) and beyond. There have been studies related to these terms in minimal supergravity based models. Consideration of an F-type SUSY breaking scenario in the hidden sector with two chiral superfields, however, showed Planck-scale suppression of such terms. From an unbiased point of view on the sources of SUSY breaking, the NH terms in a phenomenological MSSM (pMSSM) type of analysis showed a possibility of a large SUSY contribution to the muon g−2, a reasonable amount of corrections to the Higgs boson mass and a drastic reduction of the electroweak fine-tuning for a higgsino-dominated neutralino in some regions of parameter space. We investigate here the effects of the NH terms in a low-scale SUSY breaking scenario. In our analysis with minimal gauge mediated supersymmetry breaking (mGMSB) we probe how far the results can be compared with the previous pMSSM plus NH terms based study. We particularly analyze the Higgs, stop and electroweakino sectors, focusing on a higgsino-dominated neutralino and chargino, a feature typically different from what appears in mGMSB. The effect of a limited degree of RG evolution and the vanishing of the trilinear coupling terms at the messenger scale can be overcome by choosing a non-minimal GMSB scenario, such as one with a matter-messenger interaction. 
• 16:50 A role of SUSY before/around Planck 20' Considering the NLSUSY structure of space-time inspired by the nonlinear representation of SUSY (NLSUSY) and performing the ordinary geometric arguments of the Einstein general relativity (GR) principle, we obtain a NLSUSY-invariant Einstein-Hilbert-type general relativity action (nonlinear supersymmetric general relativity, NLSUSYGR) equipped with a cosmological term encoding the robust SUSY breaking in space-time itself. NLSUSYGR would collapse spontaneously (Big Collapse) to ordinary Riemann space-time (Einstein gravity) and a Nambu-Goldstone fermion (matter) for [superGL(4,R)/GL(4,R)]. We show in a simple model that the SM, and probably SUGRA as well, can emerge in the true vacuum of NLSUSYGR as the effective theory composed of the NG fermion, which bridges naturally the cosmology and the low energy particle physics and gives new insights into unsolved problems of cosmology and the SM; it may explain naturally mysterious relations among them, e.g. the space-time dimension four, the dark energy density ≃ (neutrino mass)^4, the three-generation structure of quarks and leptons, the magnitude of the gauge coupling constant, etc. [Ref.] K. Shima, plenary talk (lecture) at the Conference on Cosmology, Gravitational Waves and Particles, 6-10 January 2017, NTU, Singapore; Proceedings of CCGWP, ed. Harald Fritzsch (World Scientific, Singapore, 2017), 301. Speaker: Prof. Kazunari Shima (Saitama Institute of Technology) Material: • 17:10 Bayesian Analysis and Naturalness in (Next-to-)Minimal SUSY Models 20' One of the key motivations for supersymmetric (SUSY) models is their ability to naturally stabilize the electroweak scale and so address the hierarchy problem. However, in the Minimal Supersymmetric Standard Model (MSSM), accommodating a 125 GeV Higgs boson appears to once again require a degree of fine-tuning. 
This has fueled interest in non-minimal SUSY models, such as the Next-to-MSSM (NMSSM), that raise the Higgs mass at tree level, and so are claimed to be more natural. Such a comparison, when made on the basis of traditional fine-tuning measures, is somewhat futile, since the result heavily depends on the chosen definition of fine-tuning. We instead advocate an approach in which the plausibility that a given model reproduces the electroweak scale is quantified using Bayesian statistics. We contrast popular fine-tuning measures with naturalness priors, which automatically arise in this approach, in the constrained MSSM and a semi-constrained NMSSM. We find that results obtained using naturalness priors agree qualitatively with traditional measures, while having a well-defined probabilistic interpretation. Our comparison shows that naturalness can be rigorously grounded in Bayesian statistics, and that naturalness priors provide valuable insight into the hierarchy problem. Speaker: Dylan Harries Material: • 18:30 - 22:00 Conference Dinner Location: Kleine Beethoven-Halle • Friday, May 25, 2018 • 08:00 - 09:00 Registration Location: Zeichensaal (1st floor) • 09:00 - 10:00 Review Talk Convener: Athanasios Dedes • 09:00 Looking beyond WIMP Dark Matter 1h0' I review theories of dark matter and their detection. Speaker: Kathryn Zurek • 10:00 - 10:30 Plenary Session Convener: Athanasios Dedes • 10:00 Gravitational waves from first order phase transitions 30' I will discuss phase transitions at the TeV scale, in particular the electroweak one (in extensions of the standard model). I will review the current status of how gravitational waves are generated during the phase transition, and show how the resulting gravitational wave signal can be computed from key properties of the transition. Finally, I will discuss detection prospects at future interferometers, such as LISA. 
Speaker: Stephan Huber Material: • 10:30 - 11:00 Coffee Break ( Foyer ) • 11:00 - 12:30 Plenary Session Convener: Prof. Hans Peter Nilles (Univ. Bonn) • 11:00 Signatures of particle production during inflation 30' We will discuss several phenomenological signatures that can result from particle production during inflation, with particular attention to models of axion inflation. The signatures include large non-gaussianity and sourced gravitational waves at CMB scales, primordial black holes, and gravitational waves at interferometer scales. Speaker: Marco Peloso Material: • 11:30 Dark Matter Searches with Charged Cosmic Rays 30' Weakly Interacting Massive Particles (WIMPs) have been the target of cosmic ray experiments for decades. AMS-02 is the first experiment which can realistically probe WIMP annihilation signals in the antiproton channel. Due to the tiny experimental errors, uncertainties in the astrophysical background have become the most limiting factor for dark matter detection. I will use the combination of antiproton, boron-to-carbon and positron data in order to systematically reduce uncertainties related to the propagation of charged cosmic rays. In addition, I will use a wide collection of accelerator data to improve the calculation of the astrophysical antiproton source term. Finally, I will present a spectral search for dark matter annihilation in the AMS-02 antiproton data. I will also comment on prospects of dark matter detection with antinuclei. Speaker: Dr. Martin Winkler (Bonn University) Material: • 12:00 New signals from dark sectors 30' The astrophysical evidence for dark matter suggests that the standard model should be supplemented by a dark sector, containing a particle dark matter candidate. 
In my talk I will give an overview of new experimental signatures of non-minimal dark sectors with new interactions, in particular: emerging jets from QCD-like dark sectors at colliders, flavour-violating signatures of dark pions in fixed-target experiments, and gravitational wave signatures from phase transitions in the dark sector. Speaker: Pedro Schwaller Material: • 12:30 - 14:00 Lunch Break • 14:00 - 16:00 Plenary Session Convener: Stefan Pokorski • 14:00 Saxion/Higgs Inflation and Axion Dark Matter 30' The saxion - the modulus of the Peccei-Quinn scalar - or a mixture of it with the modulus of the Higgs - represents a viable inflaton candidate, if one takes into account the possible non-minimal coupling of the PQ scalar to gravity. Remarkably, reheating in saxion/Higgs inflation inevitably restores the PQ symmetry and therefore results in a lower bound on the axion mass around 30 micro-eV. Otherwise, the amount of axion dark matter exceeds the observed amount of cold dark matter. This cosmological scenario can be decisively tested by the next generation of CMB and axion experiments, such as CMB-S4, MADMAX, and IAXO. Speaker: Andreas Ringwald (DESY) Material: • 14:30 Recent Progress in sub-GeV Dark Matter Direct Detection 30' Speaker: Tien-Tien Yu Material: • 15:00 Novel decay and scattering signatures of light dark matter 30' I will discuss experimental and observational signatures of dark matter with mass at and well below the MeV scale. The first signature regards dark matter decay into dark radiation states. I will comment on the experimental detectability of dark radiation, and further show that 21 cm astronomy becomes a probe of very light dark radiation; an explanation of the EDGES result is offered. The second signature regards dark matter scattering on nuclei and electrons in direct detection experiments. I will show how smaller but irreducible signal components can be tapped to dramatically extend the low-mass reach of existing searches. 
The luminosity upgrade of the LHC (HL-LHC) is being actively pursued, with a target integrated luminosity of 3 ab−1 by the year 2037. There has been increasing interest in upgrading the energy of the LHC (HE-LHC) to a center-of-mass energy of about 27 TeV by doubling the magnetic field in the same tunnel, as part of the effort of the European Strategy for Particle Physics led by CERN. I present a few well-motivated case studies to demonstrate the physics potential of the HE-LHC, in comparison with the HL-LHC expectations.
# Graphs of Functions

A graph of a function is a visual representation of the function. An algebraic rule can be used to produce the graph of a function. The algebraic rule of the function is a way of describing a set of ordered pairs. The set of ordered pairs can also be represented visually by using a graph. For example, the rule $f(x)=x+1$ represents the set of ordered pairs where the second coordinate is one more than the first coordinate. There are infinitely many pairs in this set, such as $(0,1),\,(1,2),\,(2,3)$, and $(3,4)$. If these points are plotted on a coordinate plane, the line through them represents all the possible ordered pairs in the function.

Functions can be graphed using a variety of methods. Some methods are more suitable for certain types of functions than others. One method that can be slow, but works for any type of function, is plotting points. When plotting points to graph a function, it is important to plot enough points to represent the important information about the function. After plotting several points, fill in missing information as needed by plotting additional points. It is also helpful to have a general idea of the shape of the graph based on the type of function.

**Step-By-Step Example: Graphing a Function by Plotting Points**

Sketch the graph of the function $f(x)=x^2+3$.

**Step 1.** Make a table of values by evaluating the function for several values of $x$.

| $x$ | $f(x)=x^{2}+3$ | $y=f(x)$ |
| --- | --- | --- |
| $-2$ | $(-2)^2+3$ | $7$ |
| $-1$ | $(-1)^2+3$ | $4$ |
| $0$ | $0^2+3$ | $3$ |
| $1$ | $1^2+3$ | $4$ |
| $2$ | $2^2+3$ | $7$ |

**Step 2.** Use the table to identify the points on the graph.

| $x$ | $y=f(x)$ | $(x, y)$ |
| --- | --- | --- |
| $-2$ | $7$ | $(-2,7)$ |
| $-1$ | $4$ | $(-1,4)$ |
| $0$ | $3$ | $(0,3)$ |
| $1$ | $4$ | $(1,4)$ |
| $2$ | $7$ | $(2,7)$ |

**Solution.** Plot the points on the coordinate plane, and draw a smooth curve through the points. The shape of the graph is a parabola. The domain of the function is all real numbers. The range is $[3,\infty)$. The minimum value is 3. There is no maximum. The function is even.
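The table of values in Step 1 can be reproduced with a short script. This is an illustrative sketch, not part of the original lesson; the function name `f` simply mirrors the example.

```python
# Build the table of values for f(x) = x^2 + 3 at x = -2, -1, 0, 1, 2.
def f(x):
    return x ** 2 + 3

points = [(x, f(x)) for x in range(-2, 3)]
print(points)  # [(-2, 7), (-1, 4), (0, 3), (1, 4), (2, 7)]
```

Plotting these pairs and connecting them with a smooth curve gives the parabola described in the solution.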
The function is decreasing on the interval $(-\infty, 0)$ and increasing on the interval $(0, \infty)$.

### Vertical Line Test

The vertical line test uses vertical lines to determine whether a relation is a function. Any two distinct points on a vertical line have the same $x$-coordinate and different $y$-coordinates. So if two or more points in a relation are on the same vertical line, then the $x$-value corresponds to multiple $y$-values. If any vertical line intersects the graph in more than one point, the graph does not represent a function.

To determine whether the graph of a relation represents a function:

1. Draw or imagine vertical lines running through the graph.
2. Look for any places that a vertical line would cross or touch the graph more than one time. These represent an $x$-value that corresponds to more than one $y$-value.

- If every vertical line crosses the graph only once, the relation is a function.
- If any vertical line crosses the graph more than once, the relation is not a function.

**Function:** The graph represents the relation $y=x^2$. There is no vertical line that touches the graph more than once. The relation is a function.

**Not a function:** The graph of the relation is in the shape of a circle. Each vertical line intersects the circle at two different $y$-values. Therefore, the relation is not a function.

### Piecewise Functions

A piecewise function consists of separate pieces of the same function. Each piece behaves differently based on the rule of its defined interval. To graph a piecewise function, graph the pieces separately over the intervals for which they are defined.

**Step-By-Step Example: Graphing a Piecewise Function**

Graph the function:

$$f(x)=\left\{\begin{aligned}3x+5&&x \lt 0\\4x+7&&x\ge0\end{aligned}\right.$$
**Step 1.** Analyze the first piece of the function. When the $x$-values are less than zero, this rule applies: $f(x)=3x+5$. The slope is 3. The $y$-intercept is 5. The interval extends to negative infinity and does not include zero.

**Step 2.** Analyze the second piece of the function. When the $x$-values are greater than or equal to zero, this rule applies: $f(x)=4x+7$. The slope is 4. The $y$-intercept is 7. The interval extends to infinity and includes zero.

**Solution.** Graph the pieces together. The first piece, $f(x)=3x+5$, is a line with a slope of 3 and a $y$-intercept of 5. The interval extends to negative infinity, so use an arrow at the left end of the graph. Since the interval does not include zero, show the $y$-intercept with an empty circle to indicate that the value is not a part of the graph. The second piece, $f(x)=4x+7$, is a line with a slope of 4 and a $y$-intercept of 7. The interval extends to infinity, so use an arrow at the right end of the graph. Since the interval includes zero, show the $y$-intercept with a solid circle at the endpoint to indicate that the value is part of the graph.
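The piecewise rule above can also be evaluated directly in code, which makes it easy to check which piece applies at a given input. This is an illustrative sketch, not part of the original lesson.

```python
# Evaluate the piecewise function:
#   f(x) = 3x + 5  for x < 0
#   f(x) = 4x + 7  for x >= 0
def f(x):
    if x < 0:
        return 3 * x + 5
    return 4 * x + 7

print(f(-1))  # 2   (first piece: 3*(-1) + 5)
print(f(0))   # 7   (second piece: 4*0 + 7, since the interval includes zero)
print(f(2))   # 15  (second piece: 4*2 + 7)
```

Note that `f(0)` uses the second piece, matching the solid circle at the endpoint in the graph.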
# Math Help - Cardinal Exponentiation

1. ## Cardinal Exponentiation

Right so, I have this major block with set exponentiation and unfortunately we have to do it in my set theory course with cardinals. I'm gonna post the question and the answer and post what (after many months of thinking about it) is the best I can do on my own...

Question. If κ, λ and μ are cardinals with 0 < κ ≤ λ, show that $\mu^{\kappa} \leq \mu^{\lambda}.$

Lecturer's Solution. Let κ = P(A), λ = P(B) and μ = P(C). Let f : A → B be an injection. Let k : B → A be the surjective map defined in the proof of Prop 1.3.1, which exists because κ > 0. Then we define a map $C^A \to C^B$ by G(g)(b) = g(k(b)) (+++). We claim G is injective. If G(g) = G(g′) then g(k(b)) = g′(k(b)) for all b ∈ B. Since k is surjective, k(b) ranges over all possible elements of A and so g(a) = g′(a) for all a ∈ A. Hence, g = g′.

--------------------------------------------------------------------------------------------------------------------

Notice the (+++), this is the part that scrambles my brain, making injections, surjections and bijections. I'm not quite sure what $g$ is either; I seem to think it's a possible injection that takes an element in $C^A$ to $C^B$. But my attempted answer is slightly different and I want to know if it looks mildly right and if not, can you explain what the map in (+++) actually means!

So we have $f: A \to B$ is an injective map. Define $k: B \to A$ as a surjective map from B to A. So, for every $a \in A$ there is a $b \in B$ such that $a = k(b)$. So, now the bit (for me)... We define a map from $C^A \to C^B$ by... $G: g(a) = g(k(b))$... I think I'll just stop there though, the rest of the Q is a bit easier, just the defining-the-map part that gets me. Having just re-read it I'm pretty sure mine isn't even a map, it's just a renaming of $a$.

2. I will come back later and re-read your proof but your idea is correct.

3. Originally Posted by Drexel28: "I will come back later and re-read your proof but your idea is correct." Cool thanks, it's not exactly a full proof but I wanted to at least put down my thoughts. Note though, the top proof is NOT MINE, that's the actual answer. My attempt is the stuff down at the bottom. 
Kinda ironic me debating set theory in one thread and trying to say it's all wrong, then getting stuck on what are probably straightforward questions in this thread...

4. Deadstar, you lost me when you wrote 'P(A)'. What is 'P'? Power set? I don't see that you need it.

Let '<' and '=<' stand for the cardinal 'less than' and 'less than or equal to' relations. Let '^' stand for cardinal exponentiation. Let '\' stand for binary complement.

Theorem: If u, K, L are cardinals and 0 < K =< L, then u^K =< u^L.

Proof: Let F = {f | f is a function from K into u}. Let H = {h | h is a function from L into u}. It suffices to show an injection from F into H. Since K is not empty, if u is empty, then F is empty, and we're done. So suppose u is not empty; so let c be in u. Let G be a function on F defined by: G(f) = f union {<x c> | x in L\K}. It's easy to see that G is an injection.

5. Originally Posted by MoeBlee
Deadstar, you lost me when you wrote 'P(A)'. [...]

Thanks for that, but to be honest that looks quite different to what we've done, and frankly, given the amount I'm struggling with this course, I'm gonna stay with proofs that look like the one in my post. I don't know what P stands for, I assume power set. It's not my solution, it's the one given by the lecturer. My attempt is at the end of the post. Also, what is <x c>?
I have no idea what [f union {<x c> | x in L\K}] means, but I have a very slight idea how the injections in my lecturer's post were arrived at, so I'm gonna have to stick with ones similar to it.

6. I don't know where 'A', 'B', and 'C' come from or what they are supposed to be in the writeup you mentioned. My approach is extremely simple and I think the first thing one might think of. We know that u^K is just the number of functions from K into u. We know that u^L is just the number of functions from L into u. So let F be the set of functions from K into u, and let H be the set of functions from L into u. An injection is obvious: Let G be the function from F into H defined as follows: For a function f from K into u, just extend it to a function from L into u by adding to f all the ordered pairs <x c> where x is any member of L but not a member of K and c is some chosen member of u. The idea is so simple you can even draw it: For a function f in F, do the following: Draw a circle for K and a separate circle for u. Then put dots inside those circles, then lines from the dots in K to their values under the function f (i.e. each member of K gets matched to some member of u). Then add an extension to the circle for K; this extension has all the members of L that are not in K, and match every dot in that extension to the same member (call it 'c') in u. And you see that you now have a function from L into u. Call that new function 'G(f)'. And it's easy to see that if f and f* are different functions from K into u then G(f) and G(f*) are different.
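To make the (+++) construction concrete, here is a small finite sanity check (a sketch not from the thread; the sets A, B, C and the maps f and k are toy choices for illustration). It builds G(g)(b) = g(k(b)) and verifies that G is injective by brute force:

```python
from itertools import product

# Finite stand-ins: |A| = kappa = 2, |B| = lambda = 3, |C| = mu = 2.
A = [0, 1]
B = ["x", "y", "z"]
C = ["p", "q"]

# An injection f: A -> B, and a surjection k: B -> A built from it:
# send f(a) back to a, and everything outside the image to a fixed element of A.
f = {0: "x", 1: "y"}
f_inv = {v: a for a, v in f.items()}
k = {b: f_inv.get(b, A[0]) for b in B}   # k("z") falls back to A[0]

def G(g):
    """Map g in C^A to G(g) in C^B via G(g)(b) = g(k(b))."""
    return {b: g[k[b]] for b in B}

# Enumerate all of C^A and check that G is injective.
CA = [dict(zip(A, vals)) for vals in product(C, repeat=len(A))]
images = [tuple(sorted(G(g).items())) for g in CA]
assert len(set(images)) == len(CA)   # no two g map to the same G(g)
```

The injectivity argument in the thread is visible here: because k is surjective, G(g) already records g(a) for every a in A, so distinct g give distinct G(g).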
http://en.wikipedia.org/wiki/Cuban_prime
Cuban prime A cuban prime is a prime number that is a solution to one of two different specific equations involving third powers of x and y. The first of these equations is: $p = \frac{x^3 - y^3}{x - y},\ x = y + 1,\ y>0$[1] and the first few cuban primes from this equation are (sequence A002407 in OEIS): 7, 19, 37, 61, 127, 271, 331, 397, 547, 631, 919, 1657, 1801, 1951, 2269, 2437, 2791, 3169, 3571, 4219, 4447, 5167, 5419, 6211, 7057, 7351, 8269, 9241, 10267, 11719, 12097, 13267, 13669, 16651, 19441, 19927, 22447, 23497, 24571, 25117, 26227 The general cuban prime of this kind can be rewritten as $\tfrac{(y+1)^3 - y^3}{y + 1 - y}$, which simplifies to $3y^2 + 3y + 1$. This is exactly the general form of a centered hexagonal number; that is, all of these cuban primes are centered hexagonal. As of January 2006 the largest known has 65537 digits with $y = 100000845^{4096}$,[2] found by Jens Kruse Andersen. The second of these equations is: $p = \frac{x^3 - y^3}{x - y},\ x = y + 2,\ y>0.$[3] This simplifies to $3y^2 + 6y + 4$. With a substitution $y = n - 1$ it can also be written as $3n^2 + 1, \ n>1$. The first few cuban primes of this form are (sequence A002648 in OEIS): 13, 109, 193, 433, 769, 1201, 1453, 2029, 3469, 3889, 4801, 10093, 12289, 13873, 18253, 20173, 21169, 22189, 28813, 37633, 43201, 47629, 60493, 63949, 65713, 69313 The name "cuban prime" has to do with the role cubes (third powers) play in the equations, and has nothing to do with Cuba. Notes 1. ^ Cunningham, On quasi-Mersennian numbers 2. ^ Caldwell, Prime Pages 3. ^ Cunningham, Binomial Factorisations, Vol. 1, pp. 245-259
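The two closed forms above are easy to check numerically. The following sketch (illustrative, not from the article's sources) generates both kinds with a naive trial-division primality test and reproduces the opening terms of OEIS A002407 and A002648:

```python
def is_prime(n):
    # Naive trial division; fine for the small values checked here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# First kind: p = ((y+1)^3 - y^3) / ((y+1) - y) = 3y^2 + 3y + 1
first_primes = [p for p in (3 * y * y + 3 * y + 1 for y in range(1, 60)) if is_prime(p)]

# Second kind: p = ((y+2)^3 - y^3) / ((y+2) - y) = 3y^2 + 6y + 4
second_primes = [p for p in (3 * y * y + 6 * y + 4 for y in range(1, 60)) if is_prime(p)]

print(first_primes[:5])   # 7, 19, 37, 61, 127
print(second_primes[:5])  # 13, 109, 193, 433, 769
```

Note that not every y yields a prime (e.g. y = 5 gives 91 = 7 × 13 in the first form); the sequences list only the prime values.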
https://tex.stackexchange.com/questions/480952/error-in-texmaker-building-a-file-error-could-not-start-the-command-aux
# Error in Texmaker building a file: Error : could not start the command : “.aux” I am trying to transition from a very old and no longer supported latex platform to using Miktex with texmaker. There seems to be a problem with bibtex, but actually the problem may be more than just that now, since I may have messed things up trying to fix it on my own (knowing nothing to begin with, unfortunately). At first it would build fine except it wouldn't do the bibtex at all. Thinking maybe it couldn't find my *.bib file, I put "C:/Users/cohen5/Documents/Research/Testrefs" %.aux into the Bib(la)tex command in the Configure Texmaker box. I also tried it without the "Testrefs" at the end, and also with "Testrefs.bib" at the end instead. None work. So now it will not build at all, getting the error: Error : could not start the command : "C:/Users/cohen5/Documents/Research/Testrefs" "test".aux I then tried uninstalling Texmaker and then reinstalled it, hoping it would reset the Bib(la)tex command to what it used to be, but no such luck. So I guess this is two questions. (1) How do I get it back to building at all? and (2) Once it will build again, how do I get it to do the bibliography? Here is a test file: \documentclass[pra]{revtex4-2} \begin{document} \maketitle Here is one citation \cite{Ref1} and here is another \cite{Ref2}. \bibliography{Testrefs} \end{document} P.S. I have also tried with the previous version of revtex (Revtex4-1). Here is my Testrefs.bib file: @ARTICLE{Ref1, AUTHOR={Someone1a and Someone1b}, TITLE={Title1}, JOURNAL={Journal1}, VOLUME={44}, PAGES={5555}, YEAR={1999} } @ARTICLE{Ref2, AUTHOR={Someone2a and Someone2b}, TITLE={Title2}, JOURNAL={Journal2}, VOLUME={55}, PAGES={7777}, YEAR={2000} } • First make sure you are happy that MiKTeX is up to date, use MiKTeX-console as your launch point where you change settings or call TeXworks on the left or run console commands. 
When you do updates or other significant changes make sure to run the TASKS commands to refresh filename database and fonts. Put your testfile and bib file in a fresh folder and opening the testfile with Texmaker I get a good result by using the top left pulldown to select pdfLaTeX+MakeIndex+BibTeX then a click on the green arrow works as expected. It may need TWO runs on some occasions – KJO Mar 22 at 18:11 • I just updated Miktex. I do not see Texworks on the left and don't understand what you mean by "or run console commands". I also don't know what you mean by "run the TASKS commands". I put those files in a fresh folder as you suggest, but still get the same error message. – Scott Mar 22 at 18:31 • I meant I do not see texmaker, I did not mean to write texworks. – Scott Mar 22 at 18:39 • Ok so I should have checked windows MiKTeX-console has TeXworks built in are you on a mac perhaps ? – KJO Mar 22 at 18:39 • I also later discovered errors in my .bib file, which I have no idea where they came from. I'm pretty sure they didn't use to be there, they include strange characters that I am sure I never put there, so it is rather strange. Texmaker was doing other strange things, so I have changed over to using TexStudio and am very happy with using that. – Scott Mar 25 at 3:03
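For reference, the Bib(la)tex command field in Configure Texmaker should not point at the .bib file at all; it runs bibtex on the project's .aux file. A commonly used value (check your Texmaker version; `%` is Texmaker's placeholder for the master document's basename) is:

```
bibtex %.aux
```

The .bib file is located via the `\bibliography{Testrefs}` line, so `Testrefs.bib` only needs to sit next to the .tex file (or on the BIBINPUTS path). The equivalent manual sequence, run from the document's folder, would be:

```
pdflatex test
bibtex test
pdflatex test
pdflatex test
```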
http://tex.stackexchange.com/questions/49850/indexing-macros-without-a-leading-backslash-in-a-dtx-file
# Indexing Macros without a leading backslash in a dtx file I'm documenting a file containing several biblatex bibmacros and booleans and would like to create an index via \EnableCrossrefs, but % \begin{macro}{mybibmacro} % \begin{macrocode} \newbibmacro{mybibmacro}{} % \end{macro} produces the entry \mybibmacro in the index and of course all further occurrences of mybibmacro in the code are ignored. Is there a way to avoid the leading backslash? Of course it would be perfect if future occurrences such as \usebibmacro{mybibmacro} would also be indexed under mybibmacro. \newbool{mybool} has exactly the same problem. Combining this to a real example yields: % \iffalse \documentclass{ltxdoc} \EnableCrossrefs \CodelineIndex \begin{document} \DocInput{test.dtx} \end{document} % \fi % % \DescribeEnv{mybibmacro} % This is my bibmacro. % % \DescribeMacro{mybool} % The boolean \verb|mybool| will determine, if \verb|mybibmacro| is executed. % % \StopEventually{\PrintIndex} % % \begin{macro}{mybool} % Here we define the boolean. % \begin{macrocode} \newbool{mybool} % \end{macrocode} % \end{macro} % % \begin{environment}{mybibmacro} % Here we define the bibmacro. % \begin{macrocode} \newbibmacro{mybibmacro}{} % \end{macrocode} % \end{environment} % % Now the macro is executed depending on the status of \verb|mybool|. Ideally, both should be indexed as being used here. % \begin{macrocode} \ifbool{mybool}{\usebibmacro{mybibmacro}}{} % \end{macrocode} % % \Finale Running this through latex and makeindex yields In the index mybool received a leading backslash and neither the usage of mybool nor of mybibmacro is indexed in line 3. Also, the listing of mybibmacro under environments is slightly confusing, it might be good to get rid of that and of course optimal if one could add other categories in addition to environment (such as boolean, bibmacro etc.). 
- Probably you want to (ab)use or base something on \begin{environment} rather than \begin{macro}, as that's not expecting the backslash form.

First, an extension working for environment usage, as it is simpler; a more complete example is in the next section. Here is an example modifying doc so that uses of \begin{foo} in macro code are not indexed as uses of \begin but as uses of the environment foo. \end is automatically excluded from the index list.

test2.sty

\def\macro@finish{%
\macro@namepart
\ifx\macro@namepart\xbegin
\expandafter\macro@grabname
\else
\ifx\macro@namepart\xend
\else
\ifnot@excluded
\edef\@tempa{\noexpand\SpecialIndex{\bslash\macro@namepart}}%
\@tempa
\fi
\fi\fi}
\def\xbegin{begin}
\def\xend{end}
\begingroup
\lccode`\(=`\{
\lccode`\)=`\}
\lowercase{\endgroup
\def\macro@grabname\fi(#1){%
(#1)%
\SpecialEnvIndex{#1}%
}
}

test2.dtx

% \iffalse
\documentclass{ltxdoc}
\usepackage{test2}
\EnableCrossrefs
\CodelineIndex
\begin{document}
\DocInput{test2.dtx}
\end{document}
% \fi
%
% \DescribeMacro{\mymacro}
% Some meaningful description.
%
% \StopEventually{\PrintIndex}
%
% \begin{macro}{\mymacro}
% Here we define the macro.
% \begin{macrocode}
\newcommand{\mymacro}{hello}
% \end{macrocode}
% \end{macro}
% \begin{environment}{myenv}
% Here we define the env.
% \begin{macrocode}
\newenvironment{myenv}{\par[[[}{]]]\par}
% \end{macrocode}
% \end{environment}
%
% And here we use it.
% \begin{macrocode}
\PackageInfo{test2}{\mymacro}
% \end{macrocode}
% \begin{macrocode}
\def\foo{\begin{myenv}kkk\end{myenv}}
% \end{macrocode}
% \clearpage
% \begin{macrocode}
\def\foobar{\begin{myenv}jjj\end{myenv}}
% \end{macrocode}
%
% \Finale

Process this with:

pdflatex test2.dtx
makeindex -s gind.ist test2
pdflatex test2.dtx

Note that doc doesn't care about how a macro is defined, just what it looks like: \begin{environment} starts a definition but shows the macro without a backslash and indexes it under "environment" (standard doc).
The addition here is that begin{foo} gets indexed as an environment use. So if you have a package with a definition form for bibmacro such as \newbibmacro and a use form such as \usebibmacro, you would need to copy the definitions of environment from doc, including the above extension, to a set of macros that are the same but with environment changed to bibmacro and \begin changed to \usebibmacro, and then again for boolean or any other non-backslash define/use categories that you need.

Full BibLaTeX version

So, taking the above and then basically parameterising the environment code so it works for environment, bibbool and bibmacro results in:

testbool.dtx

% \iffalse
\documentclass{ltxdoc}
\usepackage{test2}
\EnableCrossrefs
\CodelineIndex
\begin{document}
\DocInput{testbool.dtx}
\end{document}
% \fi
%
% \DescribeBibMacro{mybibmacro}
% This is my bibmacro.
%
% \DescribeBibBool{mybool}
% The boolean \verb|mybool| will determine, if \verb|mybibmacro| is executed.
%
% \DescribeMacro{\texmacro}
% Checking I haven't broken the original
% indexing macros for \verb|\texmacro|.
%
% \StopEventually{\PrintIndex}
%
% \begin{bibbool}{mybool}
% Here we define the boolean.
% \begin{macrocode}
\newbool{mybool}
% \end{macrocode}
% \end{bibbool}
%
% \begin{macro}{\texmacro}
% Here we define the boolean.
% \begin{macrocode}
\def\texmacro{}
% \end{macrocode}
% \end{macro}
%
% \begin{bibmacro}{mybibmacro}
% Here we define the bibmacro.
% \begin{macrocode}
\newbibmacro{mybibmacro}{}
% \end{macrocode}
% \end{bibmacro}
%
% Now the macro is executed depending on the status of \verb|mybool|. Ideally, both should be indexed as being used here.
% \begin{macrocode}
\ifbool{mybool}{\usebibmacro{mybibmacro}}{}
% \end{macrocode}
%
% \begin{macrocode}
\texmacro
% \end{macrocode}
%
% \Finale

test2.sty

\def\bibmacro{\def\macro@type{bibmacro}\environment}
\let\endbibmacro\endenvironment
\def\bibbool{\def\macro@type{bibbool}\environment}
\let\endbibbool\endenvironment
\def\macro@type{environment}%% default
\def\macro@finish{%
\macro@namepart
\ifx\macro@namepart\xbegin
\def\macro@type{environment}%
\else\ifx\macro@namepart\xifbool
\def\macro@type{bibbool}%
\let\macro@namepart\xbegin
\else\ifx\macro@namepart\xusebibmacro
\def\macro@type{bibmacro}%
\let\macro@namepart\xbegin
\fi\fi\fi
\ifx\macro@namepart\xbegin
\expandafter\macro@grabname
\else
\ifx\macro@namepart\xend
\else
\ifnot@excluded
\edef\@tempa{\noexpand\SpecialIndex{\bslash\macro@namepart}}%
\@tempa
\fi
\fi\fi}
\def\xbegin{begin}
\def\xifbool{ifbool}
\def\xusebibmacro{usebibmacro}
\def\xend{end}
\begingroup
\lccode`\(=`\{
\lccode`\)=`\}
\lowercase{\endgroup
\def\macro@grabname\fi(#1){%
(#1)%
\SpecialxEnvUseIndex\macro@type{#1}%
}
}
\def\SpecialxEnvUseIndex#1{%
\expandafter\SpecialxxEnvUseIndex\expandafter{#1}}
\def\SpecialxxEnvUseIndex#1#2{\@bsphack
\special@index{#2\actualchar{\string\ttfamily\space#2} (#1)}%
\special@index{#1s:\levelchar#2\actualchar{\string\ttfamily\space#2}}\@esphack}
\def\SpecialMainEnvIndex{%
\expandafter\SpecialMainxEnvIndex\expandafter{\macro@type}}
\def\SpecialMainxEnvIndex#1#2{\@bsphack\special@index{%
#2\actualchar
{\string\ttfamily\space#2} (#1)%
\encapchar main}%
\special@index{#1s:\levelchar#2\actualchar{%
\string\ttfamily\space#2}\encapchar main}\@esphack}
\def\DescribeBibBool{%
\leavevmode\@bsphack\begingroup\MakePrivateLetters
\def\macro@type{bibbool}\begingroup\Describe@Env}
\def\DescribeBibMacro{%
\leavevmode\@bsphack\begingroup\MakePrivateLetters
\def\macro@type{bibmacro}\begingroup\Describe@Env}
\def\DescribeBibEnv{%
\leavevmode\@bsphack\begingroup\MakePrivateLetters
\def\macro@type{environment}\begingroup\Describe@Env}
\def\Describe@Env#1{\endgroup
\marginpar{\raggedleft\PrintDescribeEnv{#1}}%
\SpecialEnvIndex{#1}\endgroup\@esphack\ignorespaces}
\def\SpecialEnvIndex{%
\expandafter\SpecialxEnvIndex\expandafter{\macro@type}}
\def\SpecialxEnvIndex#1#2{\@bsphack
\index{#2\actualchar{\string\ttfamily\space#2} (#1)\encapchar usage}%
\index{#1s:\levelchar#2\actualchar{\string\ttfamily\space#2}\encapchar usage}\@esphack}
-
Thank you, I have updated my question providing an MWE that will hopefully clear things up. – Jonathan Apr 16 '12 at 14:13
Perfect! Thank you very much! I don't quite understand why the definitions get a separate line for their entry in the respective category, but it more than does the job! – Jonathan Apr 16 '12 at 23:43
Sorry, there was a #1 that should be #2 while writing to the index file; this is fixed and test2.sty and the image updated – David Carlisle Apr 17 '12 at 9:10
Why doesn't this work when I paste the content of the sty file into the header of the dtx file? I get the error ! Too many }'s. l.45 \fi\fi} (that's the last line of \macro@finish). It works with input so I really don't understand what could have happened .... – Jonathan Apr 21 '12 at 17:10
Well, I think it's because of the doc package convention of guarding the top level document section with %\iffalse...\fi TeX's if-fi scanner needs to see balanced if-fi pairs but on the initial pass, the doc package hasn't been loaded so \ifnot@excluded isn't defined but the matching \fi is a primitive. There would be ways to avoid that but simplest is to keep it in a separate .sty :-) – David Carlisle Apr 21 '12 at 19:17
http://math.stackexchange.com/questions/338822/how-to-show-for-a-psd-matrix-a-that-left-left-ai-right-1-right
# How to show for a PSD matrix $A$ that $\left \| \left ( A+I \right )^{-1} \right \| \leq \frac{1}{1+\sigma _{\min}\left ( A \right )}$?

If $A \in \mathbb{C}^{n \times n}$ is positive semidefinite, show that $\left \| \left ( A+I \right )^{-1} \right \| \leq \frac{1}{1+\sigma _{\min}\left ( A \right )}$, where $\sigma _{\min}\left ( A \right )$ is the smallest singular value of $A$, and $\left \| \cdot \right \|$ is any unitarily invariant induced matrix norm.

-

For a general unitarily invariant matrix norm $\|\cdot\|$ (that is not necessarily induced), we have $\|X\|\ge\|X\|_2$ (cf. Horn and Johnson, Matrix Analysis, p.308, corollary 5.6.35). Unitarily diagonalizing $A$ and putting $X=(A+I)^{-1}$, we see that you should have the inequality sign flipped. However, on $\mathbb{C}^{n\times n}$, the only matrix norm that is both induced and unitarily invariant is the spectral norm $\|A\|_2=\sigma_\max(A)$ (again, see the above-mentioned corollary and its proof). And your inequality is actually an equality when the spectral norm is used. So, the direction of the inequality sign does not matter in this case.

Although the condition that $\|\cdot\|$ is induced is redundant, it is imposed here perhaps to make the proof easier. And the direction of the inequality sign is perhaps deliberate, so that one doesn't need to know or prove the fact that the spectral norm is the only unitarily invariant induced matrix norm for complex matrices.

-

Thanks for your explanation, it was really helpful. So could you please tell me how to prove the equality with $\left \| \cdot \right \|_2$? –  user79230 Mar 23 '13 at 21:02

@Sam As was said in the answer, unitarily diagonalize $A$. Since the norm is unitarily invariant, you may assume that $A$ is a nonnegative diagonal matrix and calculate $\|(A+I)^{-1}\|_2$ directly. –  user1551 Mar 23 '13 at 22:29

Thanks so much. –  user79230 Mar 23 '13 at 22:47

Are you completely sure about the $\leq$ sign, shouldn't it be $\geq$?
I tried to prove your statement but got a different answer. We can use some facts about singular values of positive definite matrices: $$\|(A+I)^{-1}\|_2\|A+I\|_2=\frac{\sigma_{\max}(A+I)}{\sigma_{\min}(A+I)}$$ $$\|A+I\|_2=\sqrt{\sigma_{\max}((A+I)^{T}(A+I))}=\sqrt{\sigma_{\max}(A+I) \circ\sigma_{\max}(A+I)}$$ where $\circ$ is the Hadamard product. $$\|(A+I)^{-1}\|_2=\frac{\sigma_{\max}(A+I)}{\sigma_{\min}(A+I)\sqrt{\sigma_{\max}(A+I) \circ\sigma_{\max}(A+I)}}$$ $$\sigma_{\max}(A+I)=1+\sigma_{\max}(A)$$ $$\sigma_{\min}(A+I)=1+\sigma_{\min}(A)$$ Then we plug it into the first equation: $$\|(A+I)^{-1}\|_2=\frac{1+\sigma_{\max}(A)}{(1+\sigma_{\min}(A))\sqrt{(1+\sigma_{\max}(A)) \circ(1+\sigma_{\min}(A))}}$$ or $$\|(A+I)^{-1}\|_2=\frac{(1+\sigma_{\max}(A))^{\frac{1}{2}}}{(1+\sigma_{\min}(A))^{\frac{3}{2}}}$$ and assuming that $\sigma_{\max}>\sigma_{\min}$, we get that: $$\|(A+I)^{-1}\|_2\geq\frac{1}{1+\sigma_{\max}(A)}$$ $$\|(A+I)^{-1}\|_2\geq\frac{1}{1+\sigma_{\min}(A)}$$ - Would not $\sigma_{\max}(A+I)$ be a number? So, what is this Hadamard product? –  Tapu Mar 23 '13 at 16:33 @Tapu just multiplication. In the "fact" I used standard notation as in textbooks. –  Caran-d'Ache Mar 23 '13 at 16:34 @Caran-d'Ache, why $\sigma_{\max}((A+I)^{T}(A+I)) = \sigma_{\max}(A+I) \circ\sigma_{\max}(A+I)$? The inequality sign is as asked in the question. –  user79230 Mar 23 '13 at 17:21
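user1551's point that the bound is in fact an equality for the spectral norm is easy to verify numerically: for Hermitian PSD $A$ the singular values coincide with the eigenvalues, so $\|(A+I)^{-1}\|_2 = 1/(1+\sigma_{\min}(A))$. A quick NumPy check (a sketch, not part of the original thread; the random PSD matrix is an arbitrary test case):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T                      # symmetric PSD by construction

# Spectral norm of (A+I)^{-1}: np.linalg.norm with ord=2 returns sigma_max.
inv_norm = np.linalg.norm(np.linalg.inv(A + np.eye(5)), 2)
sigma_min = np.linalg.svd(A, compute_uv=False).min()

# Equality, not just <=, for the spectral norm.
assert np.isclose(inv_norm, 1.0 / (1.0 + sigma_min))
```

The reason is that $(A+I)^{-1}$ has eigenvalues $1/(1+\lambda_i)$, and the largest of these comes from the smallest eigenvalue $\lambda_{\min} = \sigma_{\min}(A)$.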
https://gamedev.stackexchange.com/questions/143046/tiled-deferred-rendering-frustum-calculation-problem
# Tiled deferred rendering frustum calculation problem

I know there are plenty of tutorials on how to create tiled frusta with compute shaders, but I'm trying to understand the steps and want to build it on the CPU with pure C# code. I translated a lot of Intel's approach (which can be found at Intel's post: Deferred Rendering for Current and Future Rendering Pipelines; sorry, I have to earn more reputation to link this) to C# code, which was kinda hard work because I am not familiar with compute shaders. So now I have some problems with the algorithm.

1. A single light is contained by a lot of tiles, even if the AttenuationEnd (or sphere radius) of the light is set < 0.1. It seems to have no effect. (The image below shows the rays of each tile with light info.) White: Tiles with no light | Red: Tiles with light

2. I don't get right results if calculating a frustum ray from its corners. The following image shows the partially rendered (from 5 to 5.1) rays with origin at 0, 0, 0. Shouldn't there be more space between each tile at 70 degree fov?

This is the code I'm using. I commented out a lot of stuff, also with debug info, if possible.

vec4[] frustumPlanes = new vec4[6];
// There is 1 light with the following properties:
// Position : 5, 5, 5
// AttenuationEnd : 0.1 (should fit inside some frusta)
// Use a simple 70 fov (70 / 180 * PI) perspective setup.
// width = 486
// height = 279
// near = 0.1
// far = 10
mat4 projection = scene.Camera.ProjectionMatrix;
int TILES_COUNT = 16;
for (int tX = 0; tX < TILES_COUNT; tX++)
{
for (int tY = 0; tY < TILES_COUNT; tY++)
{
// tileBias lies between 1 and -1, decreases from 1 to -1
vec2 tileBias = new vec2(1 - tX / (TILES_COUNT - 1f), 1 - tY / (TILES_COUNT - 1f)) * 2f - 1f;
vec4 c1 = new vec4(-projection.m00, projection.m01, tileBias.x, projection.m03);
vec4 c2 = new vec4(projection.m10, -projection.m11, tileBias.y, projection.m13);
vec4 c4 = new vec4(projection.m30, projection.m31, 1.0f, projection.m33);
// c1 = { -0.82, 0, 1, 0 } < 3rd index is tileBias.x for tX = 0
// c2 = { 0, -1.43, 1, 0 } < 3rd index is tileBias.y for tY = 0
// c4 = { 0, 0, 1, 0 }
frustumPlanes[0] = c4 - c1;
frustumPlanes[1] = c4 + c1;
frustumPlanes[2] = c4 - c2;
frustumPlanes[3] = c4 + c2;
// The following 3rd and 4th index signs confuse me, why +/-?
frustumPlanes[4] = new vec4(0f, 0f, 1f, -0.1f); // near
frustumPlanes[5] = new vec4(0f, 0f, 1f, 10f); // far
// Normalizing corners
for (int i = 0; i < 4; i++)
frustumPlanes[i] *= 1f / frustumPlanes[i].xyz.Length;
// An attempt of converting the frustum into a ray.
vec3 rayDir = frustumPlanes[0].xyz + frustumPlanes[1].xyz + frustumPlanes[2].xyz + frustumPlanes[3].xyz;
rayDir = rayDir.Normalized;
// Calculates whether a light is inside the frustum
bool anyLightInFrustum = false;
for (int i = 0; i < lights.Count; i++)
{
Light light = lights[i];
vec4 lightPos = new vec4(light.Pos, 1.0f);
bool inFrustum = true;
for (int j = 0; j < 6; j++)
{
float dist = vec4.Dot(frustumPlanes[j], lightPos);
inFrustum = inFrustum && (dist >= -light.AttenuationEnd);
}
if (inFrustum)
anyLightInFrustum = true;
}
// Set the color to red, if a light is contained
vec4 color = new vec4(1);
if (anyLightInFrustum)
color.rgb = vec3.UnitX;
// Draws a ray with a range from {4} to {5}, origin at {0, 0, 0}
DrawRayWithColor(tessellator, color, rayDir, 5.0f, 5.1f);
}
}

Maybe some of you can give me some hints for this. Even if I just did a translation mistake.

Try to use tileBias within a range of 0 to 2. You also have to use a cross product of two frustum corners. For example, `vec3 rayDir = vec3.Cross(frustumPlanes[0].xyz, frustumPlanes[2].xyz);`.
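For reference, the inner containment test the loop performs is a standard sphere-versus-plane check: a light sphere is kept for a tile when its signed distance to every normalized frustum plane is at least -radius. A minimal sketch in Python (not the question's C#; the axis-aligned toy "frustum" is an assumption for illustration):

```python
import math

def normalize_plane(p):
    # p = (a, b, c, d) representing a*x + b*y + c*z + d = 0;
    # scale so (a, b, c) is a unit normal, making the dot product a true distance.
    inv_len = 1.0 / math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return tuple(c * inv_len for c in p)

def sphere_in_frustum(planes, center, radius):
    for a, b, c, d in planes:
        dist = a * center[0] + b * center[1] + c * center[2] + d
        if dist < -radius:
            return False     # sphere lies entirely on the negative side of this plane
    return True

# Toy "frustum": the unit cube, bounded by six axis-aligned planes.
planes = [normalize_plane(p) for p in [
    (1, 0, 0, 1), (-1, 0, 0, 1),   # x in [-1, 1]
    (0, 1, 0, 1), (0, -1, 0, 1),   # y in [-1, 1]
    (0, 0, 1, 1), (0, 0, -1, 1),   # z in [-1, 1]
]]
assert sphere_in_frustum(planes, (0, 0, 0), 0.1)
assert not sphere_in_frustum(planes, (5, 5, 5), 0.1)   # matches the question's light at (5, 5, 5)
```

If the planes are not normalized (as for the unnormalized near/far planes in the question's code), the comparison against -radius is scaled by the normal's length, which is one way a tiny AttenuationEnd can appear to have no effect.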
http://physics.aps.org/story/v2/st11
# Focus: Solid Acts Like a Liquid

Published September 4, 1998 | Phys. Rev. Focus 2, 11 (1998) | DOI: 10.1103/PhysRevFocus.2.11

#### Supersoft Transition Metal Silicides

E. G. Moroni, R. Podloucky, and J. Hafner
Published August 31, 1998

Molecular beam epitaxy allows researchers to create thin slabs of materials with extremely precise control for use in the electronics industry and in basic physics research. A “substrate” crystal such as silicon serves as the base upon which atoms from a beam are deposited in layers. The deposited atoms tend to line up with those in the substrate, or at least in patterns strongly influenced by substrate atoms. Researchers have discovered many atomic configurations (“pseudomorphic phases”) that exist only in epitaxial films and not in the independently crystallized, or “bulk,” material.

Physical properties can differ dramatically between bulk and epitaxial phases. For example, some of the transition metal silicides are bulk semiconductors but become metals under appropriate thin film conditions. Researchers hope to exploit that property to make microscopic metallic connections to semiconductor chips. Elio Moroni, of the Vienna Technical University, says that physicists must first understand and learn to control the many complicated atomic configurations available in these materials before they can be exploited.

Using computer simulations based only on basic quantum mechanics, Moroni and his colleagues studied elastic properties, which are closely related to electronic properties–the ones physicists ultimately need to control for industrial applications. They studied the effects of straining epitaxial crystal structures by varying the atomic spacing of the substrate. The team was surprised to find that variations over a range of 0.3 Å along certain directions with selected configurations of $CoSi$, $NiSi$, and $FeSi_2$ cost no energy. The energy in the metal-silicon bonds and the total volume remained constant.
Under these specific strains the materials seemed to react the way a liquid would–if squeezed in one direction, they would simply expand in another. The usual “harmonic” approximation for solids, by contrast, is that bonds act like coiled springs–as they are compressed or stretched, the energy increases with the square of the change in length. The authors explain the “supersoft” quality as a result of the many epitaxial phases available to these materials. As the bond lengths are forced to change, the crystals find alternate configurations with approximately the same energy and volume.

Hans von Känel, of the Swiss Federal Institute of Technology (ETH) in Zurich, calls the results “absolutely amazing,” since the elasticity of epitaxial phases is usually completely normal, responding like a spring. He says the community had assumed that the elasticity of these heavily studied materials was well understood and expects the paper to stimulate many of his colleagues to begin trying to verify the Viennese group’s predictions.
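The “harmonic” picture contrasted here can be stated as a formula (notation mine, not from the article): the bond energy is taken as quadratic in the change of bond length,

$$E(\ell) \approx E_0 + \tfrac{1}{2}\,k\,(\ell - \ell_0)^2,$$

with equilibrium length $\ell_0$ and effective spring constant $k$. The “supersoft” finding amounts to $k \approx 0$ along the special strain directions, over a range of about 0.3 Å.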
https://www.physicsoverflow.org/user/grue/history
Recent history for grue

2 months ago: question answered - What is the differential cross section of Møller scattering in square meters
3 months ago: question answered - What is the differential cross section of Møller scattering in square meters
4 months ago: question answered - What is the differential cross section of Møller scattering in square meters
6 months ago: edited a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: posted a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: posted a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: edited a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: posted a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: edited a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: posted a comment - What is the differential cross section of Møller scattering in square meters
6 months ago: question commented on - What is the differential cross section of Møller scattering in square meters
6 months ago: question answered - What is the differential cross section of Møller scattering in square meters
7 months ago: edited a comment - What is the differential cross section of Møller scattering in square meters
7 months ago: posted a comment - What is the differential cross section of Møller scattering in square meters
7 months ago: question commented on - What is the differential cross section of Møller scattering in square meters
7 months ago: received upvote on question - What is the differential cross section of Møller scattering in square meters
7 months ago: question is edited - What is the differential cross section...
7 months ago: posted a question - What is the differential cross section...
1 year ago: posted a question - Does a vertex contribute (i e gamma ^ ...
2 years ago: received upvote on question - Does one have to account for identical particles twice
http://tug.org/pipermail/texworks/2011q3/004670.html
# [texworks] problem with shortcut

Stefan Löffler st.loeffler at gmail.com
Wed Jul 27 15:35:48 CEST 2011

```Hi,

On 2011-07-27 14:50, Carlo Marmo wrote:
> > *) The file must have the correct name (shortcuts.ini) and must be
> >    in the correct path
> > *) You must restart Tw before any changes to that file take effect.
> > *) If done correctly, the shortcut should appear next to the
> > *) If several functions have the same shortcut (either by default
> >    or by your customization), none of them are accessible via the
> >    shortcut anymore
>
> OK. Point 3: nothing appears in my menu

In that case, the file is indeed not recognized properly, it seems (as
opposed to some conflicting shortcuts, etc.).

> > Which version of Tw do you use, and where did you get it from
> > (MiKTeX, a web page, ...)?
>
> Tw version 0.4.0. r.759. I got it

> > Where did you create your shortcuts.ini file?
>
> .Carlo/texworks/configuration

This seems like a weird place to me, somehow (though since it's not an
absolute path and I don't know your setup, I can't tell for sure). Is the
folder you get when you click on "Scripts > Scripting TeXworks > Show
Scripts Folder" a sibling of that folder (i.e., do they have the same
parent folder, namely .Carlo/texworks)? If not, you picked the wrong
folder. According to
```
http://mathandmultimedia.com/tag/permutations-of-objects/
## Wedding Guests and Circular Permutations

In a wedding banquet, guests are seated at circular tables of four. In how many ways can the guests be seated?

We have learned that the number of permutations of $n$ distinct objects on a straight line is $n!$. That is, if we seat the four guests Anna, Barbie, Christian, and Dorcas on chairs on a straight line, they can be seated in $4 \times 3 \times 2 \times 1 = 24$ ways (see complete list).

However, circular arrangement is slightly different. Take the arrangement of guests A, B, C, D as shown in the first figure. The four possible seating arrangements are just a single permutation: in each table, the persons on the left and on the right of each guest are still the same persons. For example, in any of the tables, B is on the left hand side of A and D is on the right hand side of A. In effect, the four linear permutations ABCD, BCDA, CDAB, and DABC count as one circular permutation. This means that the number of linear permutations of 4 persons is four times the number of circular permutations. Since the number of all possible permutations of four objects is $4!$, the number of circular permutations of four objects is $\frac{4!}{4}$. » Read more
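The $\frac{4!}{4}$ argument is easy to verify by brute force. A quick sketch (mine, not part of the lesson) that collapses rotations of the same seating into one arrangement:

```python
from itertools import permutations

def circular_arrangements(guests):
    """Count seatings of guests around a round table,
    treating rotations of the same ordering as identical."""
    seen = set()
    for p in permutations(guests):
        # Canonical form: rotate until the smallest name sits first,
        # so all n rotations of one seating map to the same key.
        i = p.index(min(p))
        seen.add(p[i:] + p[:i])
    return len(seen)

print(circular_arrangements("ABCD"))  # -> 6, i.e. 4!/4
```

The same count falls out of the general formula $(n-1)!$ for $n$ guests.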
https://www.global-sci.org/intro/article_detail/jms/9959.html
Volume 47, Issue 3

Complete Convergence for Weighted Sums of Negatively Superadditive Dependent Random Variables

J. Math. Study, 47 (2014), pp. 287-294. Published online: 2014-09

• Abstract

Let $\{X_n,n\geq1\}$ be a sequence of negatively superadditive dependent (NSD, in short) random variables and $\{a_{nk}, 1\leq k\leq n, n\geq1\}$ be an array of real numbers. Under some suitable conditions, we present some results on complete convergence for weighted sums $\sum_{k=1}^na_{nk}X_k$ of NSD random variables by using the Rosenthal type inequality. The results obtained in the paper generalize some corresponding ones for independent random variables and negatively associated random variables.

• Keywords

Negatively superadditive dependent random variables, Rosenthal type inequality, complete convergence.

MSC: 60F15

Authors: Yu Zhou (1066705362@qq.com), Fengxi Xia (1046549063@qq.com), Yan Chen (cy19921210@163.com), Xuejun Wang (wxjahdx2000@126.com), all at the School of Mathematical Sciences, Anhui University, Hefei, Anhui 230601, P.R. China

Citation: Yu Zhou, Fengxi Xia, Yan Chen & Xuejun Wang. (2019). Complete Convergence for Weighted Sums of Negatively Superadditive Dependent Random Variables. Journal of Mathematical Study. 47 (3). 287-294. doi:10.4208/jms.v47n3.14.04
https://www.transtutors.com/questions/when-choosing-one-course-of-action-while-working-with-a-dilemma-the-other-courses-of-1386025.htm
# When choosing one course of action while working with a dilemma, the other courses of action are...

When choosing one course of action while working with a dilemma, the other courses of action are lost and become unavailable. This makes ethical choices in dilemma situations particularly what? (Points: 5)

- Incoherent
- Complicated
- Illogical
- Painful
- Cruel
https://www.gamedev.net/forums/topic/660584-one-buffer-vs-multiple-buffers/
# OpenGL One Buffer Vs. Multiple Buffers...

## Recommended Posts

Hello,

Consider this a consultation-of-opinion question: Is it better to use one interleaved buffer or multiple buffers for vertex/normal/UV data?

Was reading this question and response here: http://stackoverflow.com/questions/12245687/does-using-one-buffer-for-vertices-uvs-and-normals-in-opengl-perform-better-tha

Just seeing what is better; I have implemented both, but my framerate was high enough in both cases that I didn't notice a big difference.

##### Share on other sites

> Is it better to use one interleaved buffer or multiple buffers for vertex/normal/UV data?

Yes to the interleaved buffer - the AoS layout is preferable. Think about what is actually happening internally. You've got a warp of 32 elements being processed. Each one has a vertex input structure, which is going to be a single block of memory containing all of the vertex attributes. What the hardware wants to do is load the entire vertex into the registers of the processing cores. Most likely it's capable of doing this for an entire warp at once. Interleaved attributes allow it to do a single block memory copy to set up all of the vertices to run a warp. When I was working at NV (2006), it was often the case that non-interleaved streams would be soft-interleaved by the driver as part of draw call setup before being rendered. I'm not sure if that's still required on modern hardware or not. But it's best to assume that the underlying hardware is in most cases only able to work with interleaved data.

Current official advice is to de-interleave only when the update frequencies of the buffers are different. This advice may be further distorted by situations which have lots of data transfer, meaning dynamic/streaming buffers.
See L.Spiro's comments here: http://lspiroengine.com/?p=96

Personally I've never seen a reason to worry about this particular bandwidth issue, but he probably has more 'in the field' experience than I do on PC platforms and may be able to expand on those comments.

Edited by Promit

##### Share on other sites

Excellent information, thank you. Very helpful. Other replies always welcome.

##### Share on other sites

Making separate buffers for attributes can prove wise when it lets you exchange X vertex-buffer changes of batched interleaved buffers per frame for a single multiple-buffer binding over a group of frames (or the application entirely). Yet, in my eyes, the driver either does a sad, cache-coherency-unfriendly delivery of vertex attributes, or interleaves the separate buffers into one in the end, resulting in the same logic as rebinding 6 batched interleaved vertex buffers per frame. Who knows.

##### Share on other sites

> Making separate buffers for attributes can prove wise when it lets you exchange X vertex-buffer changes of batched interleaved buffers per frame for a single multiple-buffer binding over a group of frames

In this situation, it's better to build all the interleaved variations manually before uploading them to the driver, then just pick the most appropriate vertex buffer. I have seen engines that know what shaders are used for what mesh and create the correct custom buffer for that situation. Always seemed like a hassle to me though.

##### Share on other sites

This is very interesting, because I've got very different results (GTX 670 and 480). My use case is a compute shader tree traversal, and the nodes have vec4 data for position, color, direction and integer data packed in a uvec4 (tree indices etc.). First I used a single shader storage buffer the AoS way. That was too slow, so I tried to put each vec in its own texture, so SoA. Can't remember exactly, but the speed-up was 10-30 times I think. Does this make sense?
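To make the AoS-vs-SoA distinction above concrete: "soft interleaving" just means packing the separate position/normal/UV streams into one vertex-sized record per vertex. A numpy sketch (illustrative only, not from the thread), with the stride and byte offsets you would hand to glVertexAttribPointer for this layout:

```python
import numpy as np

# Separate (SoA) attribute streams for three vertices.
positions = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)  # vec3
normals   = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], dtype=np.float32)  # vec3
uvs       = np.array([[0, 0], [1, 0], [0, 1]],          dtype=np.float32)  # vec2

# Interleave into one AoS buffer: [pos | normal | uv] per vertex.
interleaved = np.hstack([positions, normals, uvs]).ravel()

# Per-vertex stride and attribute byte offsets for glVertexAttribPointer.
stride = (3 + 3 + 2) * 4           # 32 bytes per vertex
offsets = {"position": 0, "normal": 12, "uv": 24}

print(interleaved[:8])  # the first vertex's 8 floats, pos + normal + uv
```

One GPU-side fetch per vertex then touches a single contiguous 32-byte record instead of three scattered streams, which is the locality argument made above.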
I assumed using SoA is faster because multiple texture units can be used to grab the data. The other fact is that I do not need to read all the data for any node I visit, which is a difference to the fixed vertex pipeline example from above. I do not need to read the node direction if the position is already too far away etc., and in the AoS method I did read the full struct in any case before any test. But I doubt this alone explains the huge speed-up.

Please let me know what you think; I'm new to GPUs and it's still hard to predict performance. And there are crazy things happening, f. ex. sometimes it's faster to reserve shared memory without using it. Sounds stupid, but it's true, especially for simple shaders. Someone discovered this before and posted on the NV forum, but no official response. I assume the reason is that reserving shared memory prevents the thread scheduler from doing too much task switching. I don't know if this happens in other languages too (CUDA, CL, DX).

Other crazy things are: It's faster to do a blur on the tree with all 4 children, 4 neighbours and parent, than to do a simple color averaging from children to parent on the same tree. ??? It's >2x faster to do a stackless but very divergent tree traversal, one thread per node, than to do a perfectly data/code/runtime-coherent parallel traversal using a stack (the stack is too big for shared memory).

I really have the feeling that drivers are not well polished for compute shader performance, but it's a little too much work to port to CUDA to see the difference... Any thoughts welcome :)

##### Share on other sites

> Please let me know what you think; I'm new to GPUs and it's still hard to predict performance.

Experts can't predict performance either. We're just driven by common guidelines AND THEN WE TRY OUR OPTIONS AND PROFILE. Computing (in general, i.e.
not just GPUs, also CPUs, and RAM, and caches, etc.) has become so complex it's virtually impossible to accurately predict which approach is going to be faster (although we can make educated guesses). For example, I've seen a case where adding extra instructions to a CPU routine in a tight loop caused the loop to execute faster on Haswell chips. The reason had to do with a store-to-load forwarding stall, where adding an instruction allowed the CPU to prevent a full pipeline stall on every iteration. It is anecdotal and was a rather synthetic benchmark (not real-world code), but the point is that it is completely unintuitive to think that adding an instruction would help the code run faster; which is a great example that modern architectures are so complex we can't grasp it all. Stop asking, just try, profile, and share the results.

##### Share on other sites

> Stop asking, just try, profile, and share the results.

I agree; in this case I did try both and the results for me were initially the same BUT I keep reading everywhere that interleaved is better, so for now I will work with interleaved buffers.

##### Share on other sites

> And there are crazy things happening, f. ex. sometimes it's faster to reserve shared memory without using it.

This is actually not that uncommon. The problem is that the cores only have a limited amount of register space (64k registers per SMX core) which gets divided up by however many threads are running in parallel. So if you are running 1024 threads per SMX, every thread can use up to 64 registers. If you are running the maximum of 2048 threads, every thread only gets to use 32 registers. If more local variables are needed than registers are available, some registers are spilled onto the stack similarly to how it's done on the CPU. But contrary to the CPU, GPU memory latencies are incredibly high, so spilling stuff that is often needed onto the stack can increase the runtime.
Now shared memory is also a restricted resource (64 KB per SMX on Kepler) but one that can't be spilled. So, if every block only needs less than 2 KB, you can get the maximum of 32 resident blocks per SMX. But if you increase the amount of reserved shared memory, let's say to just below 4 KB, then you can only have 16 resident blocks. Now, halving the number of resident blocks also halves the total number of resident threads, so each thread has twice the number of registers at its disposal. So, increasing the amount of reserved shared memory can decrease the number of resident blocks/threads, which increases the number of registers each thread can use, which can reduce register spilling and costly loads from the stack. I don't know about compute shaders, but for CUDA I believe the profiler can check for this.

##### Share on other sites

This helps a lot (I copied this post to my source to read again later). It also explains why performance is a matter of number of threads vs. needed memory, which is what I've recognized over the last months. There is one thing that I've got very wrong all the time: I thought a single SMX could only run 32 threads, and if you wanted more they would be spread across multiple SMX units. I really understand much better now :) Many thanks!

##### Share on other sites

> Experts can't predict performance either. We're just driven by common guidelines AND THEN WE TRY OUR OPTIONS AND PROFILE.

But let's be fair - most of us can't afford test labs with multiple generations of GPUs, multiple versions of drivers, and so forth. Nevermind the time consumed in testing the different permutations. A different driver (or version) has the potential to randomly upheave the whole thing, and it's not like NV or AMD are about to help us out with real application profiles. So most of us are stuck with the handful of hardware/software configurations that are at hand, making educated guesses about the rest.
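The shared-memory/occupancy arithmetic described above can be sketched directly (the numbers are the Kepler limits quoted in the post; real occupancy also depends on register usage and other per-SMX limits):

```python
REGISTER_FILE = 65536          # 32-bit registers per Kepler SMX ("64k")
SHARED_MEM    = 64 * 1024      # bytes of shared memory per SMX
MAX_BLOCKS    = 32             # resident block limit per SMX

def resident_blocks(shared_bytes_per_block):
    """How many blocks fit on one SMX, limited only by shared memory here."""
    return min(MAX_BLOCKS, SHARED_MEM // shared_bytes_per_block)

def registers_per_thread(blocks, threads_per_block):
    """Register budget once the resident thread count is fixed."""
    return REGISTER_FILE // (blocks * threads_per_block)

# 2 KB/block: 32 resident blocks; 4 KB/block: only 16 resident blocks,
# so each thread gets twice the registers -> less spilling.
for shared in (2048, 4096):
    b = resident_blocks(shared)
    print(shared, b, registers_per_thread(b, 64))
```

This is why reserving more shared memory can paradoxically speed up a register-hungry kernel.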
I'm probably better off than most here - for my part I have access to a 6770, 7970, a GTX 480, a GTX 670, a Titan, and a couple Macbooks' worth of mobile GPUs. If this were strictly hobby work, I'd probably be lucky to even have one sample of NV and AMD available.
2017-10-21 08:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18878494203090668, "perplexity": 1965.2191764935908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824675.67/warc/CC-MAIN-20171021081004-20171021101004-00224.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-452/topics/Topic-8361/subtopics/Subtopic-110114/?activeTab=theory
UK Primary (3-6): Identify and describe types of patterns from tables or graphs

## Data in tables

When we see a table of data, often there is a relationship between the information in one column and the information in the other column. We can look for any relationships and then use this to work out other values.

For example, when a group of Grade 6 runners raced against Grade 3 runners, their times were adjusted to make it fair. Their times are written in the table below.

| Actual time (secs) | Adjusted time (secs) |
| --- | --- |
| 35 | 43 |
| 37 | 45 |
| 48 | 56 |
| 54 | 62 |

In Video 1, we will see how to look up information in this table and calculate an adjusted time for a runner whose actual time was $40$ secs.

## Data on a graph

This time we will look at data on a graph and see how two pieces of information are related to each other. We do that by looking at the corresponding values on the vertical and horizontal axes. Let's compare your age to your sister's age and see what happens as your sister gets $1$ year older each year. You don't have a sister? That's okay, just imagine someone else in the example.

## Writing a rule

Now we can write a rule to describe one set of data to another. We identify the rule and then use variables, or letters, to write it in a shorter way. Let's write a rule using variables for our previous example that tells us how old you are based on your sister's age.

| Information | Variable |
| --- | --- |
| Your age | $y$ |
| Your sister's age | $x$ |

Let's see how we do this.

## Writing it a different way

We looked at how to calculate your age, once we knew your sister's age. What if we wanted to write the rule in a different way? This time we will work out your sister's age if we know your age. How do you think our rule, or equation, might change? Let's have a look in our last video.

Did you know? You've just worked through some algebra!
#### Worked Examples

##### Question 1

A catering company uses the following table to work out how many sandwiches are required to feed a certain number of people. Fill in the blanks:

| Number of People | Sandwiches |
| --- | --- |
| $1$ | $5$ |
| $2$ | $10$ |
| $3$ | $15$ |
| $4$ | $20$ |
| $5$ | $25$ |

- For each person, the caterer needs to make ___ sandwiches.
- For $6$ people, the caterer would need to make ___ sandwiches.

##### Question 2

Consider the pattern shown on this line graph:

1. If the pattern continues on, the next point marked on the line will be ( ___ , ___ ).

2. Fill in the table with the points from the graph, and the one you just found (the first one is filled in for you):

| $x$-value | $y$-value |
| --- | --- |
| $0$ | $0$ |
| ___ | ___ |
| ___ | ___ |
| ___ | ___ |

3. Choose all statements that correctly describe this pattern:

- A. The rule is $x = y \div 4$
- B. The rule is $y = 4 \times x$
- C. The rule is $y = x \div 4$
- D. As $x$ increases, $y$ increases.

##### Question 3

Consider the pattern shown on this dot plot:

1. If the pattern continues on, the next point marked on the line will be ( ___ , ___ ).

2. Fill in the table with the points from the graph, and the one you just found (the first one is filled in for you):

| $x$-value | $y$-value |
| --- | --- |
| $0$ | $0$ |
| ___ | ___ |
| ___ | ___ |
| ___ | ___ |

3. Choose all statements that correctly describe this pattern:

- A. The rule is $x = y \div 2$
- B. The rule is $y = x \div 2$
- C. As $x$ increases, $y$ increases.
- D. The rule is $y = 2 \times x$
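The rule-from-a-table idea in Question 1 can be checked with a few lines of code. This is a small illustrative sketch (plain Python, not part of the lesson) that encodes the rule "sandwiches $= 5 \times$ people", verifies it against every row of the table, and extends the pattern to $6$ people:

```python
# The table pairs each number of people with the sandwiches required.
table = {1: 5, 2: 10, 3: 15, 4: 20, 5: 25}

# Rule read off the table: each person needs 5 sandwiches.
def sandwiches(people):
    return 5 * people

# Every row of the table satisfies the rule.
for people, count in table.items():
    assert sandwiches(people) == count

# Extending the pattern to 6 people:
print(sandwiches(6))  # 30
```

The same check works for the graph questions: replace the rule by $y = 4x$ or $y = 2x$ and test it against the points read off the axes.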
2021-09-18 19:30:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49364709854125977, "perplexity": 1763.6510395451364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056572.96/warc/CC-MAIN-20210918184640-20210918214640-00219.warc.gz"}
https://www.iacr.org/cryptodb/data/paper.php?pubkey=30807
## CryptoDB

### Paper: Message-recovery Laser Fault Injection Attack on the Classic McEliece Cryptosystem

Authors:
- Pierre-Louis Cayrel, Univ. Lyon, UJM-Saint-Etienne, CNRS, Laboratoire Hubert Curien UMR 5516, F-42023, Saint-Etienne, France
- Brice Colombier, Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMA, Grenoble, France
- Vlad-Florin Dragoi, Department of Mathematics and Computer Sciences, Aurel Vlaicu University of Arad, Bd. Revolutiei, No. 77, 310130-Arad, Romania
- Alexandre Menu, IMT, Mines Saint-Etienne, Centre CMP, Equipe Commune CEA Tech - Mines Saint-Etienne, F-13541 Gardanne, France
- Lilian Bossuet, Univ. Lyon, UJM-Saint-Etienne, CNRS, Laboratoire Hubert Curien UMR 5516, F-42023, Saint-Etienne, France

DOI: 10.1007/978-3-030-77886-6_15 (login may be required)

EUROCRYPT 2021

Code-based public-key cryptosystems are promising candidates for standardization as quantum-resistant public-key cryptographic algorithms. Their security is based on the hardness of the syndrome decoding problem. Computing the syndrome in a finite field, usually $\mathbb{F}_2$, guarantees the security of the constructions. We show in this article that the problem becomes considerably easier to solve if the syndrome is computed in $\mathbb{N}$ instead. By means of laser fault injection, we illustrate how to force the matrix-vector product in $\mathbb{N}$ by corrupting specific instructions, and validate it experimentally. To solve the syndrome decoding problem in $\mathbb{N}$, we propose a reduction to an integer linear programming problem. We leverage the computational efficiency of linear programming solvers to obtain real-time message recovery attacks against all the code-based proposals to the NIST Post-Quantum Cryptography standardization challenge. We perform our attacks on worst-case scenarios, i.e. random binary codes, and retrieve the initial message within minutes on a desktop computer.
Our practical evaluation of the attack targets the reference implementation of the Niederreiter cryptosystem in the NIST finalist *Classic McEliece* and is feasible for all proposed parameter sets of this submission. For example, for the 256-bit security parameter sets, we successfully recover the plaintext in a couple of seconds on a desktop computer. Finally, we highlight the fact that the attack is still possible if only a fraction of the syndrome entries are faulty. This makes the attack feasible even though the fault injection does not have perfect repeatability, and it reduces the computational complexity of the attack, making it even more practical overall.

##### BibTeX

@inproceedings{eurocrypt-2021-30807,
  title={Message-recovery Laser Fault Injection Attack on the Classic McEliece Cryptosystem},
  publisher={Springer-Verlag},
  doi={10.1007/978-3-030-77886-6_15},
  author={Pierre-Louis Cayrel and Brice Colombier and Vlad-Florin Dragoi and Alexandre Menu and Lilian Bossuet},
  year=2021
}
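The abstract's core observation, that a syndrome computed in $\mathbb{N}$ leaks far more than one computed in $\mathbb{F}_2$, can be illustrated numerically. The toy sketch below is my own illustration, not the paper's code: the parity-check matrix and parameters are made up, and a brute-force search over low-weight error vectors stands in for the integer-linear-programming step. Every error vector consistent with the integer syndrome is also consistent with the mod-2 syndrome, so the integer syndrome can only narrow the candidate set:

```python
import itertools
import random

random.seed(0)
n, k, t = 12, 6, 2               # toy code length, dimension, error weight
rows = n - k                     # parity-check matrix H is (n-k) x n
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]

def syndrome(e, mod2):
    # Matrix-vector product H e, either over the integers or reduced mod 2.
    s = [sum(H[i][j] * e[j] for j in range(n)) for i in range(rows)]
    return tuple(x % 2 for x in s) if mod2 else tuple(s)

# Pick a secret error vector of weight t and compute both syndromes.
secret = [0] * n
for j in random.sample(range(n), t):
    secret[j] = 1
s_mod2 = syndrome(secret, mod2=True)
s_int = syndrome(secret, mod2=False)

# Brute force over all weight-t error vectors (ILP handles this at real sizes).
def consistent(target, mod2):
    cands = []
    for pos in itertools.combinations(range(n), t):
        e = [1 if j in pos else 0 for j in range(n)]
        if syndrome(e, mod2) == target:
            cands.append(e)
    return cands

sols_mod2 = consistent(s_mod2, True)
sols_int = consistent(s_int, False)
assert secret in sols_int
assert len(sols_int) <= len(sols_mod2)  # integer syndrome is at least as restrictive
print(len(sols_mod2), len(sols_int))
```

Scaling this up, the integer syndrome typically pins down the error vector almost uniquely, which is why the paper's reduction to integer linear programming recovers messages so quickly.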
2021-12-05 08:13:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3220014274120331, "perplexity": 3513.6376800624357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00432.warc.gz"}
https://math.stackexchange.com/questions/2844990/show-that-left-left-x-x-frac1x-frac1x-right-circ-right
# Show that $\left(\left\{x, -x, \frac{1}{x}, -\frac{1}{x}\right\}, \circ\right)$ is a group.

Given the following problem:

Given the functions $g_1, g_2, g_3, g_4$ of $\mathbb{R}^*$ in $\mathbb{R}^*$ defined in the following way: $g_1(x) = -x$, $g_2(x) = -\frac{1}{x}$, $g_3(x) = x$ and $g_4(x) = \frac{1}{x}$. If $G = \{g_1, g_2, g_3, g_4\}$:

1. Show that $(G, \circ)$ is a group where $\circ$ is the composition of functions. Write the table.
2. Identify a generator set of $(G, \circ)$ that has the least number of elements possible.
3. Extract all the normal subgroups of $(G, \circ)$. If $H$ is one of them, describe the quotient $G/H$.

I am having lots of problems figuring out how to proceed with such a set. If I understand correctly, the composition of functions is, for example: $$(\forall x\in\mathbb{R}^*):\enspace(g_1 \circ g_2)(x) = g_1(g_2(x)) = g_1\left(-\frac{1}{x}\right) = -\left(-\frac{1}{x}\right) = \frac{1}{x}$$ Is that so? Then I know that to prove that $(G, \circ)$ is a group, I have to prove the following:

1. The internal law: $g_i \circ g_j \in G$.
2. Associativity: $g_i \circ (g_j \circ g_k) = (g_i \circ g_j) \circ g_k$.
3. Existence of the neutral element $g_e$ such that: $g_i \circ g_e = g_e \circ g_i = g_i$.
4. Existence of an inverse element $g_i'$ for each $g_i \in G$ such that $g_i \circ g_i' = g_i' \circ g_i = g_e$.

But how do I proceed with such a set?

• I think you should limit this question to proving only $(1)$, listed in the highlighted problem statement. Also, you are correct about your understanding of the composition of two functions. I changed some wording to indicate the English version of, e.g., asociatividad, which is associativity. And the word you needed in the fourth group axiom needs to be "inverse element". You define the inverse element very well. – amWhy Jul 8 '18 at 21:19
• @amWhy Thanks for the corrections. Some words got lost in my translation. As it relates to focusing on $(1)$ of the problem, any specific reason?
– Omari Celestine Jul 8 '18 at 21:26
• Since you have only four elements, it is reasonable to proceed as you did when checking your understanding of composition of functions. E.g., you have already shown that $(g_1 \circ g_2)(x) = g_1(g_2(x)) = g_1(-\frac{1}{x}) = -(-\frac{1}{x}) = \frac{1}{x} = g_4(x)$, so $g_1 \circ g_2 = g_4 \in G$. The composition of each function with another needs to be one of four elements in G. Recall that for function composition, given functions $f, h$, it is not always true that $f\circ h = h\circ f$, so it would be good to check out $g_2 \circ g_1$ as well. – amWhy Jul 8 '18 at 21:28
• @amWhy so should I edit the question and remove the others? – Omari Celestine Jul 8 '18 at 21:33
• That's what I would suggest. Else you might get just quick suggestions for each point, in an answer, where I think you might find it more helpful to get thorough help for each section. And I think you can be working on your table that you need, as you confirm that the group is closed under function composition (the internal law). Of course, if you feel the answer below answers all your questions, then you are free to keep it as is. I was simply suggesting what I thought might be more helpful to you. – amWhy Jul 8 '18 at 21:36

Compute the Cayley table, taking into account that $g_3$ is the identity function: \begin{array}{c|cccc} & g_3 & g_1 & g_2 & g_4 \\ \hline g_3 & g_3 & g_1 & g_2 & g_4 \\ g_1 & g_1 & g_3 & g_4 & g_2 \\ g_2 & g_2 & g_4 & g_3 & g_1\\ g_4 & g_4 & g_2 & g_1 & g_3 \end{array} You can notice that each element is the inverse of itself, so if this is a group it must be the Klein $4$-group $\{1,a,b,c\}$ where $a^2=1$, $b^2=1$, $c^2=1$, $ab=ba=c$, $bc=cb=a$ and $ca=ac=b$: the Cayley table is \begin{array}{c|cccc} & 1 & a & b & c \\ \hline 1 & 1 & a & b & c \\ a & a & 1 & c & b \\ b & b & c & 1 & a \\ c & c & b & a & 1 \end{array} As you see the map $g_3\mapsto 1$, $g_1\mapsto a$, $g_2\mapsto b$, $g_4\mapsto c$ is an isomorphism of "magmas".
Since the latter is a group, the given set is a group as well.

Let's do this step by step:

1. This is just a matter of doing every possible composition. It probably can be done more elegantly, but you're being asked to write the multiplication table anyway.

2. This is true in general. Since the equality $$f \circ ( g \circ h) = (f \circ g) \circ h$$ is an equality of functions, one must see that it holds for every element in the domain. Thus, in this case, if $x \in \mathbb{R}^*$, $$(f \circ ( g \circ h))(x) = f((g\circ h)(x)) = f(g(h(x))) = (f \circ g)(h(x)) = ((f \circ g) \circ h)(x)$$ which proves the equality.

3. Note that the function $e : x \in \mathbb{R}^* \mapsto x \in \mathbb{R}^*$ is the identity function on $\mathbb{R}^*$. Thus for any other such function (in particular the ones of this group), $$(f \circ e)(x) = f(e(x)) = f(x) = e(f(x)) = (e \circ f)(x)$$ and therefore $e \circ f = f = f \circ e$.

4. Again, when making the multiplication table, this will be a lot clearer.

As for the second and third questions, one can see that every group of $4$ elements is abelian (i.e. it is commutative), so every subgroup will be normal. Moreover, there are exactly $2$ groups of $4$ elements up to isomorphism: $\mathbb{Z}_4$ and $\mathbb{Z}_2 \oplus \mathbb{Z}_2$. The former is cyclic so it can be generated by a single element, whereas the second needs at least $2$ generators, and that is sufficient.

Note that in your group, any function $g_i\in G$ verifies that $g_i \circ g_i = e$, so the group can't be generated by a single element (each set $\langle g_i \rangle$ would have at most $2$ elements). Therefore, you'll have to look for a set of two generators. This, in particular, tells you that $G \simeq \mathbb{Z}_2 \oplus \mathbb{Z}_2$, so you will have the trivial subgroups $\{e\}$ and $G$, and three subgroups of order $2$. I'll leave you to identify these.
• Note that wrt your last paragraph, $\langle g_3\rangle = \{g_e\}$ with only one element, where $g_3 = g_e = x$ is the identity function, hence has order one. So not all elements of $G$ generate a subgroup of order two, as you suggest. – amWhy Jul 8 '18 at 22:19
• @amWhy sorry, I meant to say 'at most'. Fixed. – qualcuno Jul 8 '18 at 22:32
• No problem, Guida A.! (I'm pretty prolific in typing typos, or leaving out one or two words, even though I'm thinking them!) – amWhy Jul 8 '18 at 22:34
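The Cayley table above can also be checked mechanically. The following sketch is my own verification (plain Python, not from the thread): each function is compared against the four candidates on a few sample points of $\mathbb{R}^*$, using exact rational arithmetic, which is enough to tell these four maps apart.

```python
from fractions import Fraction

G = {
    'g1': lambda x: -x,
    'g2': lambda x: -1 / x,
    'g3': lambda x: x,        # identity
    'g4': lambda x: 1 / x,
}
samples = [Fraction(2), Fraction(3), Fraction(-5, 7)]  # points of R*

def identify(f):
    # Return the name of the unique g in G agreeing with f on the samples.
    matches = [name for name, g in G.items()
               if all(f(x) == g(x) for x in samples)]
    assert len(matches) == 1
    return matches[0]

# Cayley table: row o column, i.e. (a o b)(x) = a(b(x)).
table = {a: {b: identify(lambda x, fa=fa, fb=fb: fa(fb(x)))
             for b, fb in G.items()}
         for a, fa in G.items()}

assert table['g1']['g2'] == 'g4'            # matches the worked computation
assert all(table[a][a] == 'g3' for a in G)  # every element is self-inverse
assert all(table['g3'][b] == b for b in G)  # g3 is the identity
```

Closure holds because `identify` always finds a match in `G`, and the self-inverse check confirms the Klein four-group structure claimed in the answer.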
2020-07-06 17:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737173914909363, "perplexity": 149.57993909381835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881763.20/warc/CC-MAIN-20200706160424-20200706190424-00429.warc.gz"}
https://tug.org/pipermail/pstricks/2007/004648.html
Wed Jul 11 22:21:50 CEST 2007

Randy Yates wrote:
> Boris Veytsman <borisv at lk.net> writes:
>
>> HV> From: Herbert Voss <herbert49 at googlemail.com>
>> HV> Date: Wed, 11 Jul 2007 08:27:55 +0200
>>
>> HV> Randy Yates wrote:
>>>> Is it possible to do this? For example, let's say you have a block
>>>> that when you click on a block (actually click inside the pspolygon
>>>> area in the figure itself), it opens that block's design document.
>> HV> http://tug.org/PSTricks/main.cgi?file=Examples/misc#hyperref
>>
>> In this example the whole image becomes a hyperlink. What is more
>> challenging (and interesting) is to make certain areas inside
>> pspicture to be hyperlinks, like imagemap in html
>
> Yes, and that's what I need. My diagram has several blocks and I
> want each block to be a different hyperlink.
>
> Unless I enclose each one in a pspicture environment?

Sounds like no, you need it. By default all pspicture objects have no
width and height, hence they cannot act as a hyperlink. Only a box can,
like \pspicture ... If you can divide your polygon into a sequence of
boxes, then you can do it as in the following example.

Herbert

\documentclass{article}
\usepackage{pstricks}
%\usepackage{pst-pdf}
\usepackage{hyperref}
\begin{document}
The following image
\href{http://PSTricks.tug.org}{%
\begin{pspicture}(-2,-2)(2,2)%
\pscircle[fillstyle=vlines,linecolor=blue](0,0){2}
\end{pspicture}}

\vspace{2cm}

\begin{pspicture}(-3,-3)(3,3)%
\psset{linewidth=2pt}
\rput(-3,0){\href{http://PSTricks.tug.org}{\pspicture(3,3)\psline[linecolor=red](3,3)\endpspicture}}
\rput(0,0){\href{http://www.tug.org}{\pspicture(3,3)\psline[linecolor=green](0,3)(3,0)\endpspicture}}
\rput(0,-3){\href{http://www.tug.org/TeXnik}{\pspicture(3,3)\psline[linecolor=blue](3,3)\endpspicture}}
\rput(-3,-3){\href{http://www.dante.de}{\pspicture(3,3)\psline[linecolor=yellow](0,3)(3,0)\endpspicture}}
\end{pspicture}
\end{document}

--
http://PSTricks.tug.org
http://www.dante.de/CTAN/info/math/voss/
2023-03-23 08:43:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.819085955619812, "perplexity": 8396.909285995394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00734.warc.gz"}
https://www.techwhiff.com/issue/factors-of-8x-20y-10--186480
# Factors of 8x - 20y - 10

###### Question: Factors of 8x - 20y - 10

## Answers

Take out the greatest common factor of $2$: $8x - 20y - 10 = 2(4x - 10y - 5)$.
2023-03-20 09:19:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3152705729007721, "perplexity": 1962.583464002038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00005.warc.gz"}
https://cs.stackexchange.com/questions/136296/show-that-the-language-is-irregular
# Show that the Language is irregular

I was solving some problems from a past test; there was this question:

Use the closure properties of regular languages to show that the language $L$ is not regular: $$L =\{ a^3 b^n c^{n-3} \mid n>3\}$$

I know how to solve it using the Pumping Lemma, but how to solve it with closure properties? Also, I know that to prove that a language $L$ is not regular using closure properties, the technique is to combine $L$ with regular languages by operations that preserve regularity in order to obtain a language known to be not regular, but I can't figure out what to combine it with. I saw some similar questions but didn't get ideas about how to do this one. Any hints? Thanks.

Suppose that $L$ is regular. Then so is $L' = L \circ \{ccc\} = \{a^3 b^n c^n \mid n \ge 4\}$. This implies that $L'' = (aaa)^{-1}L' = \{b^n c^n \mid n \ge 4\}$ is also regular and, in turn, that $L''' = L'' \cup \{\varepsilon, bc, bbcc, bbbccc\} = \{b^n c^n \mid n \in \mathbb{N}\}$ is regular. This is a contradiction since $L'''$ is known not to be regular.

• Thanks! I haven't done inverse operations by now; can you please give an intuitive explanation of what $(aaa)^{-1}$ means? – Dhruv Joshi Mar 7 at 18:18
• $(aaa)^{-1} L'$ denotes the left quotient of $L'$ by $aaa$ and is defined as the language containing all words $x \in \Sigma^*$ such that $aaax \in L'$. To see that regular languages are closed under this operation, pick a DFA $D$ for $L'$ and feed the word $aaa$ to it. You end up in some state $q$ of $D$. Construct a new DFA $D'$ that is a copy of $D$ except that the initial state is now changed to $q$. The words accepted by $D'$ are exactly those in $(aaa)^{-1} L'$ (this strategy works in general: for $\alpha \in \Sigma^*$ and some language $L$ you can show that $\alpha^{-1}L$ is regular). – Steven Mar 7 at 18:23
• Intuitively: you are taking all words of $L'$ that start with $aaa$ and dropping the $aaa$ prefix. What you are left with is $(aaa)^{-1}L'$.
– Steven Mar 7 at 18:28 • Alternatively, if you do not want to use left quotient to remove the symbols $a$ at the start of the string, you can use the operation of homomorphism which is able to replace letters by new strings. In this case, replace every $a$ by the empty string $\varepsilon$, effectively deleting all $a$'s. – Hendrik Jan Mar 7 at 20:00
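The chain of closure operations in this argument can be mirrored with plain membership tests. The sketch below is my own illustration (Python, with a regular-expression membership test standing in for a DFA); since quotienting by $aaa$ leaves words $b^n c^n$ with $n \ge 4$, the finitely many shorter words $b^k c^k$ with $k \le 3$ are added back in the last step:

```python
import re

# Membership test for L = { a^3 b^n c^(n-3) : n > 3 }.
def in_L(w):
    m = re.fullmatch(r'aaa(b+)(c*)', w)
    return bool(m) and len(m.group(1)) > 3 \
        and len(m.group(2)) == len(m.group(1)) - 3

# L1 = L . {ccc}: concatenation with a fixed word.
def in_L1(w):
    return w.endswith('ccc') and in_L(w[:-3])

# L2 = (aaa)^{-1} L1: left quotient by aaa.
def in_L2(w):
    return in_L1('aaa' + w)

# L3 = L2 union finitely many short words, giving { b^n c^n : n >= 0 }.
def in_L3(w):
    return in_L2(w) or w in ('', 'bc', 'bbcc', 'bbbccc')

assert all(in_L3('b' * k + 'c' * k) for k in range(8))
assert not in_L3('bbc')
```

Each wrapper corresponds to one regularity-preserving operation, so if `in_L` described a regular language, `in_L3` would too, contradicting the non-regularity of $\{b^n c^n\}$.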
2021-05-07 09:50:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7684105038642883, "perplexity": 160.88763905783327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00416.warc.gz"}
https://danieltakeshi.github.io/2015/05/14/moved-to-jekyll/
# A New Era

This post marks the beginning of a new era for my blog. For almost four years (!), which saw 151 posts published (this post is #152), Seita's Place was hosted by Wordpress.com. Over the past few months, though, I've realized that this isn't quite what I want for the long run. I have now made the decision to switch the hosting platform to Jekyll. Many others (e.g., Vito Botta and Tomomi Imura) have provided reasons why they migrated from Wordpress to Jekyll, so you can read their posts to get additional perspectives. In my case, I:

• wanted to feel like I had more control over my site, rather than writing some stuff and then handing it over to a black-box database to do all the work.
• wanted to write more in Markdown and use git/GitHub more often, which will be useful for me as I continue to work in computer science.
• wanted to more easily be able to write code and math in my posts.
• wanted to use my own personal text editor (vim is my current favorite) rather than Wordpress's WYSIWYG editor.
• wanted to be more of a hacker and less like the "ignorant masses," no offense intended =).

Jekyll, which was created by GitHub founder Tom Preston-Werner1, offers a splendid blogging platform with minimalism and simplicity in mind. It allows users like me to write posts in plain text files using Markdown syntax. These are all stored in a _posts directory inside the overall blog directory. To actually get the site to appear online, I can host it on my GitHub account; here is the GitHub repository for this site2. By default, such sites are set to have a URL at username.github.io, which for me would be danieltakeshi.github.io. That I can use GitHub to back up my blog was a huge factor in my decision to switch over to Jekyll for my blog. There's definitely a learning curve to using Jekyll (and Markdown), so I wouldn't recommend it for those who don't have much experience with command-line shenanigans.
But for me, I think it will be just right, and I’m happy that I switched. # How Did I Migrate? Oh boy. The migration process did not go as planned. I was hoping to get that done in about three hours, but it took me much longer than that, and the process spanned about four days (and it’s still not done, for reasons I will explain later). Fortunately, since the spring semester is over, there is no better time for me to work on this stuff. Here’s a high-level overview of the steps: • Migrate from Wordpress.com to Wordpress.org. • Migrate from Wordpress.org to Jekyll • Proofread and check existing posts The first step to do is one that took a surprisingly long time for me: I had to migrate from Wordpress.com to Wordpress.org. It took me a while to realize that there even was a distinction: Wordpress.com is hosted by Wordpress and they handle everything (including the price of hosting, so it’s free for us), but we don’t have as much control over the site, and the extensions they offer are absurdly overpriced. Wordpress.org, on the other hand, means we have more control over the site and can choose a domain name to get rid of that ugly “wordpress” text in the URL. Needless to say, this makes Wordpress.org extremely common among many professional bloggers and organizations. In my case, I had been using Wordpress.com for seitad.wordpress.com, so what I had to do was go to Bluehost, pay to create a Wordpress.org site, which I named seitad.com, and then I could migrate. The migration process itself is pretty easy once you’ve got a Wordpress.org site up, so I won’t go into detail on that. The reason why I used Bluehost is because it’s a recommended Wordpress provider, and on their website there’s a menu option that you can click to create a Wordpress.org site. Unfortunately, that’s about it for my praise, because I otherwise really hate Bluehost. Did anyone else feel like Bluehost does almost nothing but shove various “upgrade feature XXX for $YZ” messages down our throats? 
I was even misled by their pricing situation, and instead of paying $5 to "host" seitad.com for a month, I accidentally paid $71 to host that site for a year. I did notice that they had a 30-day money-back guarantee, so hopefully I can hastily finish up this migration and request my money back so I won't have to deal with Bluehost again3.

To clarify, the only reason I am migrating to Wordpress.org at all is that the next step, using a Wordpress-to-Jekyll exporter plugin, only works on Wordpress.org sites, because Wordpress.com sites don't allow external plugins to be installed. (Remember what I said earlier about how we don't have much control over Wordpress.com sites? Case in point!)

But before we do that, there's a critical step we'll want to take: change the permalinks for Wordpress to conform to Jekyll's default style. A permalink is the link extension given to a blog post after the end of the site URL. For instance, suppose a site has address http://www.address.com. It might have a page called "News" that one can click on, with address http://www.address.com/news; here, news is the permalink. Modifying permalinks is not strictly necessary, but it will make importing comments later much easier. The default Wordpress.org scheme appends something like ?p=123 (a "p" parameter with an integer post ID). We want to change it to match Jekyll's default naming scheme, which is /year/month/day/title, and we can do that in the "Permalinks" section of the Wordpress dashboard.

Now let's discuss that Wordpress-to-Jekyll exporter I just mentioned. This plugin, created by GitHub staff member Ben Balter, can be found (you guessed it) on GitHub. What you need to do is go to the "Releases" tab and download a .zip file of the code; I downloaded version 2.0.1. Then unzip it and follow the instructions from the current README file:

1. Place the plugin in the /wp-content/plugins/ folder.
2. Activate the plugin in the WordPress dashboard.
3.
Select Export to Jekyll from the Tools menu.

Steps (2) and (3) shouldn't need much explanation, but step (1) is the trickiest. The easiest way to do it is to establish an FTP connection to the Wordpress.org server, with the "host name" field set to the URL of the old site (in my case, seitad.com). What I did was download FileZilla, a free FTP client, and use its graphical user interface to connect to my Wordpress.org site. Note that to connect to the site, one does not generally use one's Wordpress.org login; instead, one needs the login information from Bluehost4! Once I got over my initial confusion, I was able to "drag and drop" the wordpress-to-jekyll exporter plugin to the Wordpress site. You can see in the above image (of FileZilla) that I have the plugin in the correct directory on the remote site.

Executing steps (2) and (3) should then produce a jekyll-export.zip file that contains the blog entries converted from HTML to Markdown, along with other metadata such as the categories, tags, etc.

All right, now that we have our zip file, it's time to create a Jekyll directory with the `jekyll new danieltakeshi.github.io` command, where danieltakeshi should be replaced with whatever GitHub username you have. Then take that jekyll-export.zip file and unzip it in this directory. This should mean that all your old Wordpress posts are now in the _posts directory, converted to Markdown, with some metadata attached. The importer will ask if you want to override the default _config.yml file; I declined, so _config.yml remained what `jekyll new ...` created for me.

The official Jekyll documentation also lists a tool that you can use to convert from Wordpress (or Wordpress.com) to Jekyll. The problem with the Wordpress.com tool is that the original Wordpress.com posts are not converted to Markdown, but instead to plain HTML.
Jekyll can handle HTML files, but to really make the site look good, you need Markdown. I tried using the Wordpress.org (not Wordpress.com) tool from the Jekyll docs, but I couldn't get it to work because of missing Ruby libraries, which later caused a series of dependency headaches. Ugh. I think the simplicity, and the fact that posts actually get converted to Markdown automatically, are the two reasons why Ben's external Jekyll plugin is so popular among migrators.

At this point, it makes sense to commit everything to GitHub to see how the GitHub Pages site will look. The way the username.github.io site works is that it gets refreshed automatically each time you push to the master branch. Thus, in your blog directory, assuming you've already initialized a git repository there, just do something like

~~~
$ git add .
$ git commit -m "First commit, praying this works..."
$ git push origin master
~~~

These commands5 will update the GitHub repository, which automatically updates username.github.io, so you can refresh the website to see your blog.

One thing you'll notice, however, is that comments are not enabled by default. Moreover, old comments made on Wordpress.org are not present even with the use of Ben's Wordpress-to-Jekyll tool. Why this occurs can be summarized as follows: Jekyll generates static pages, but comments are dynamic. So it is necessary to use an external system, which is where Disqus comes into play. Unfortunately, it took me a really long time to figure out how to import comments correctly. I'll summarize the mini-steps as follows:

- In the Admin panel for Disqus, create a new website and give it a "shortname" that we will need later. (For this one, I used the shortname seitasplace.)
- In the Wordpress.org site, install the Disqus comment plugin6 and make sure your comments are "registered" with Disqus. What this means is that you should be able to view all comments from your blog in the Disqus Admin panel.
- Now comes the part that I originally missed, and which took me hours to figure out: I had to import the comments with Disqus! It seems a little confusing to me (I mean, don't I already have comments registered?), but I guess we have to do it. On Disqus, there is a "Discussions" panel, which has a sub-menu option for "Import" (see the following image for clarification). There, we need to upload the .xml file of the Wordpress.org site that contains all the comments, which one can obtain without a plugin by going to Tools -> Export in the Wordpress dashboard.
- You will also likely need to do some URL mapping. Comments in Disqus are stored relative to a URL, and the default URL is obviously the source from which they were imported! But if we're migrating from source A to source B, doesn't it make sense to have the comments point to source B's URL instead of source A's? In my case, I used a mapper in the "Tools" menu (in the above image) to convert all the comment links to be based on the current site's URL. That way, if the original source (i.e., the Wordpress site) gets deleted, we still retain the comments7. If you made sure the permalinks match, this process should be pretty easy.
- Finally, the last thing to do is to actually install Disqus comments in the Jekyll code. For that, I went to the "Universal Code" option for Disqus and pasted the HTML code there into the _layouts/post.html file.

After several restarts due to some random issues with Disqus/Wordpress having problems with deleted material, I was finally able to get comments imported correctly, and they had the same names assigned to the commenters! Excellent! The trackback comments, which Wordpress creates when one blog post links to another, did not get copied over, but I guess that's OK with me. I mostly wanted the human comments, for obvious reasons.

Whew! So we are done, right? Oh, never mind – we have to proofread each post!
Since I had 151 posts to import from Wordpress, that meant proofreading every single one of them. Ben's importer tool is good but not perfect, and code- or math-heavy posts are especially difficult to convert correctly. Even disregarding code and math, a common issue was that italicized text wouldn't get parsed correctly. Sometimes the Markdown asterisks were "one space too far ahead": e.g., if the word code needs to be italicized, the Markdown syntax for that is *code*, but sometimes the importer created *code *, and that dangling space can leave some ugly asterisks visible in the resulting HTML.

Even after basic proofreading, there are still additional steps one needs to take to ensure a crisp blog. One needs to

- fix the links for the images, since the images by default point to the original Wordpress address. The Wordpress-to-Jekyll plugin puts the images in the wp-content folder, but I (and the official Jekyll documentation) recommend copying them over to an assets folder. The default wp-content folder contains too many folders and sub-directories for my liking, though I guess it's useful if a blog contains thousands of images.
- fix the post-to-post hyperlinks in each post to refer to the current Jekyll version. In vim, this should be easy, as I can do a bunch of find-and-replace calls on each file. Ensuring that the Wordpress permalinks follow Jekyll-style permalinks makes this task easier.
- incorporate extra tools to get LaTeX formatting.

I haven't been able to do all these steps yet, but I'm working on it8. Whew! The best part about the migration is that you only have to do it once. Part of the problem is that I had to rely on a motley collection of blog posts to help me out; the Jekyll documentation itself was not very helpful9.

# Post-Migration Plan

In addition to the actual migration, there are some sensible steps users should take to extract maximal utility from Jekyll.
For me, I plan to

- learn more Markdown10! In addition, it makes sense to use a text editor that handles Markdown well. I'm using vim since it's my default, but it's actually not that helpful here, because I have syntax mode off (via :syntax off), and by default vim does not ship with a Markdown highlighter. I'm sure someone has created a Markdown syntax add-on for vim, so I'll search for that.
- actually make the site look better! I don't mind the simplicity of default Jekyll, but a little more "pizzazz" wouldn't hurt. I'd like to at least get a basic theme up and running, and to include excerpts from each post on the front page.
- make a redirect from my old Wordpress.com site, so that it sends users to this site. I'd rather not delete the old site all of a sudden (even though I will delete it eventually). But I will get rid of that Wordpress.org site that I had to pay to create, all just to help me migrate to Jekyll.

Incidentally, now that we've covered the migration pipeline, I thought I should make it clear how one goes about using Jekyll. To add new posts, one simply adds a file to the _posts directory that follows the convention YYYY-MM-DD-name-of-post.ext and includes the necessary front matter, which contains the title, the date, etc. Looking at the raw Markdown code of sample posts is probably the easiest way to learn.

One could update the site with each edit by adding, committing, and pushing to GitHub, but probably a better way is to preview locally by running jekyll build; jekyll serve. This creates a local copy of the site that one can have open in a web browser even without Internet access. Each time one saves a post, the server updates, so by refreshing, we can see the edit. It won't catch all edits (I had to push to GitHub and then update the local copy to get images to show up correctly), but it is useful enough that I thought I'd share (and suggest) it.
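To make the posting convention concrete, here is a minimal sketch of creating a post by hand; the file name, date, and title below are made-up examples, not actual posts from this blog:

~~~shell
# Create a post following Jekyll's _posts/YYYY-MM-DD-name-of-post.md convention.
# The date and title here are hypothetical, for illustration only.
mkdir -p _posts
cat > _posts/2015-05-22-hello-jekyll.md <<'EOF'
---
layout: post
title: "Hello, Jekyll"
---

The body of the post, written in Markdown, goes *here*.
EOF

# List the posts directory to confirm the file is in place.
ls _posts/
~~~

From there, a running jekyll serve instance should pick the new file up on its next rebuild.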
Furthermore, if the website is public, it's best to update/push polished versions of posts rather than works-in-progress.

Hopefully these personal notes prove useful to future Wordpress.{com,org}-to-Jekyll migrators. In the meantime, I'm going to fix up the rest of this site and prepare some new entries that accentuate some of Jekyll's neat features.

1. By the way, saying something like "GitHub co-founder Tom …" is the computer programming equivalent of the legal profession saying "Yale Law School graduate Bob …". The fact that he founded GitHub immediately heightens my opinion of him. Oh, by the way, do you like how Jekyll does footnotes? Just click the little arrow here and you'll be back up to where you were in the article!
2. If you have experience using GitHub, you can even fork my repository to serve as a launching point for your own site or to test out Jekyll.
3. Just to be clear, if you host a site on a public GitHub repository, then it's free. That's yet another reason to use Jekyll/GitHub!
4. This username information should be included in the first email (or one of the first, I think) you got from Bluehost. The password should be the password you use to log into Bluehost to get to your user control panel.
5. If you're wondering how I was able to get a code block highlighted like that, I wrap the commands with three tildes (~~~) before and after the text. This is with the kramdown Markdown scheme.
6. Fortunately, you can find this plugin by searching in Wordpress directly; there's no need to engage in fancy FTP stuff.
7. Actually, I haven't tested this yet. I hope this works.
8. Interestingly enough, the Jekyll docs for migrating from Wordpress.com to Jekyll currently link to an external blog post from March 2011. I found that blog post to be reasonably helpful, but it didn't really do what I needed, which tends to be a problem when following such guides.
9. To add to the complexity, there are several different versions of Markdown.
My site is currently using the kramdown style, but another popular one (which GitHub Pages uses) is redcarpet; that style messed up my footnotes, though, so I eschewed it.
http://math.eretrandre.org/tetrationforum/showthread.php?tid=83&pid=743&mode=threaded
Matrix-method: compare use of different fixpoints

Gottfried (Ultimate Fellow; Posts: 787; Threads: 121; Joined: Aug 2007)
11/11/2007, 06:05 PM (This post was last modified: 11/11/2007, 06:16 PM by Gottfried.)

Hi Henryk,

I needed some days for an answer, please excuse that. I had some difficulties concentrating on the subject, but now, here it goes...

bo198214 Wrote:
> I don't understand this attitude (whatever you mean by "truncated by principle").

Hmm, let me recall the question:

bo198214 Wrote:
> We have something that is called the matrix operator method, this truncates $B_b$ (the Carleman matrix of $b^x$) to n

Since you put the focus on this, I assumed that it is some principal aspect of the approach...

bo198214 Wrote:
> then decomposes uniquely via Eigenvalues ...

and I assume now, this is the difference. I would call this the "practical approach"; it is useful for first approximations and gives good results for a certain range of parameters b, x and h (base, top-exponent and height). One can approximate powers and fractional powers, and we could see that even with low dimensions fractional powers could be iterated, and they even provide very good results when multiples of integer powers were approached. I computed a lot of well-approximated examples to support the general idea of an eigensystem decomposition via these practical truncations.

But in my view, this was always only a rough approximation, whose main defect is that we "don't know about the quality of approximation". Indeed, the base matrix Bb is a truncation of an exact, ideally infinite matrix, so its actual entries are not affected by the size of the truncation, and thus it is the best starting point. Numerical eigensystem solutions for these truncated matrices satisfy, for instance, W(truncation)*W(truncation)^-1 = I as a perfect identity matrix, and give good approximations for some powers.
So these good properties suggest using the eigensystem based on the truncated Bb and giving it the status of a method in its own right.

However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e), the set of eigenvalues behaves partially erratically, which makes it risky to base assumptions about the final values of the tetration operation T on them. For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we find some rules which could be exploited for extrapolation. See, for instance, "Eigenvalues b=0.5", "Eigenvalues t=1.7", or the page "Graph: Eigenvalue e^(1/e)".

Thus the need for a solution of the exact eigensystem (infinite size) arises. If we find one, then again the truncations lead to approximations, but we are in a position to make statements about the bounds of error etc. This means that, using W(B) as the eigenmatrix/set of eigenvectors of B, we do not deal with W(truncated(B)) but with truncated(W(B)).

My goal with my matrix method is to have an infinite matrix W(B) whose columns W(B)[col] satisfy, together with the eigenvalues d[col],

B * W(B)[col] = d[col] * W(B)[col]

or

W(B)^-1[row] * B = d[row] * W(B)^-1[row]

seen as an identity in the infinite case, and imprecise for a finite truncation. If a row in W(B)^-1 is also of the form of a power series, then this furthermore coincides with the concept of fixpoints, by definition of the properties of an eigenvector, and "marries" these two concepts, describing a common framework for them.

Quote:
> ${B_b}_{|n}=W_{|n} D_{|n} W_{|n}^{-1}$. And then defines ${B_b}^t = \lim_{n\to\infty} W_{|n} {D_{|n}}^t W_{|n}^{-1}$. And we get the coefficients of ${\exp_b}^{\circ t}$ from the first column of ${B_b}^t$. I want your acknowledgement on this.

If my previous comments indeed match your point, then I can take a position in a possible dissent/consensus.
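As a concrete illustration of the "practical approach" (eigendecomposing the truncated Bb and taking fractional powers through the eigenvalues), here is a small numerical sketch. To be clear, this is my own NumPy illustration, not code from this thread; the base b = 1.3 and truncation size n = 8 are arbitrary choices inside the range e^(-e) < b < e^(1/e):

~~~python
# Sketch: eigendecompose a *truncated* Carleman matrix of f(x) = b^x and
# use it for fractional matrix powers, as in the "practical approach" above.
import numpy as np
from math import log, factorial

def carleman_truncated(b, n):
    # Row j holds the power-series coefficients of b^(j*x), i.e.
    # B[j, k] = (j*ln b)^k / k!, truncated to an n-by-n block.
    return np.array([[(j * log(b)) ** k / factorial(k) for k in range(n)]
                     for j in range(n)])

b, n = 1.3, 8
B = carleman_truncated(b, n)

# Numerical eigensystem of the truncation: B = W D W^{-1}.
d, W = np.linalg.eig(B)
d = d.astype(complex)      # tolerate roundoff-induced complex/negative tails
Winv = np.linalg.inv(W)

# Fractional matrix power B^(1/2) = W D^(1/2) W^{-1}; row 1 then holds the
# (approximate) power-series coefficients of the half-iterate of x -> b^x.
Bhalf = (W * d ** 0.5) @ Winv
err = np.max(np.abs(Bhalf @ Bhalf - B))   # (B^(1/2))^2 should reproduce B
~~~

Note that (B^(1/2))^2 = B holds exactly for any diagonalizable truncation; what the truncation size actually governs is how well these finite eigenvalues approximate those of the infinite matrix, which is precisely the issue discussed above.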
Gottfried Helms, Kassel
http://www.mathnet.ru/php/archive.phtml?jrnid=faa&wshow=issue&year=1997&volume=31&volume_alt=&issue=2&issue_alt=&option_lang=eng
Funktsional. Anal. i Prilozhen., 1997, Volume 31, Issue 2

- Boundary Conditions for Integrable Lattices (V. E. Adler, I. T. Habibullin), p. 1
- Exponential Analytic Sets (B. Ya. Kazarnovskii), p. 15
- Prevalence in the Space of Finitely Smooth Maps (V. Y. Kaloshin), p. 27
- Many-Dimensional Generalization of the Il'yashenko Theorem on Abelian Integrals (I. A. Pushkar'), p. 34
- Polymorphisms, Joinings, and the Tensor Simplicity of Dynamical Systems (V. V. Ryzhikov), p. 45
- Functional Moduli of Jets of Riemannian Metrics (A. S. Shmelev), p. 58

Brief communications

- Factorization of Diffeomorphisms over Phase Portraits of Vector Fields on the Plane (R. I. Bogdanov), p. 67
- Solutions of the Yang Equation and Algebraic Curves of Genus $>1$ (V. I. Dragovich), p. 70
- Singularities of Circle-Surface Contacts and Flags (V. M. Zakalyukin), p. 73
- Permutation Groups Generated by a (2,2)-Cycle and the $n$-Cycle (Yu. Yu. Kochetkov), p. 76
- A Many-Parameter Interpolation Functor and the Lorentz Space $L_{p\vec{q}}$, $\vec{q}=(q_1,\dots,q_n)$ (E. D. Nursultanov), p. 79
- Note on the Hilbert Polynomial of a Spherical Variety (A. Yu. Okounkov), p. 82
- Limit Behavior of the Spectrum in a Class of Large Random Matrices (A. Yu. Plakhov), p. 85
- A Class of Infinite-Dimensional Weight $sl(3)$-Modules (S. A. Spirin), p. 88
- Normal Forms of the Whitney Umbrella with Respect to the Contact Group Preserving a Cone (B. Z. Shapiro), p. 91
https://annualreport.ifae.es/2017/scientific-activities/bsm/
Beyond the Standard Model

Introduction

The group consists of Profs. Alex Pomarol and Eduard Masso, ICREA Research Professor Jose Ramon Espinosa, SO postdoc Dr. Giuliano Panico, and the IFAE researchers Dr. Oriol Pujolas and former ICREA Research Professor Mariano Quiros. The group's activities are mainly in Beyond the Standard Model physics and Cosmology.

Instability of the Higgs potential in the Standard Model

J.R. Espinosa studied some of the physics associated with the instability of the Higgs potential in the Standard Model. The scale of this instability, determined as the Higgs field value at which the potential drops below the electroweak minimum, is about $10^{11}$ GeV. However, such a scale is unphysical, as it is not gauge-invariant and suffers from a gauge-fixing uncertainty of up to two orders of magnitude. In a work with Mathias Garny (CERN), Thomas Konstandin (DESY) and Antonio Riotto (U. Geneva), it was shown how, by subjecting the SM to several probes of the instability (adding higher-order operators to the potential; letting the vacuum decay through critical bubbles; heating the system up to very high temperature; inflating it) and asking in each case physical questions, one is able to provide several gauge-invariant scales related to the Higgs potential instability.

In another work, in collaboration with Davide Racco and Antonio Riotto (U. Geneva), it was shown that a cosmological signature of such an instability could be dark matter in the form of primordial black holes (PBH) seeded by Higgs fluctuations during inflation. In that case, the existence of dark matter might not require physics beyond the Standard Model. The predicted spectrum of primordial black holes via this mechanism is shown in figure 1.

Anomalous magnetic moment of the muon in extra-dimensional theories

M. Quiros, in collaboration with Dr. E. Megias (Max-Planck Institute) and L.
Salas (IFAE), studied the experimental value of the anomalous magnetic moment of the muon, which points towards new physics coupled non-universally to muons and electrons. Working in extra-dimensional theories, which solve the electroweak hierarchy problem with a warped metric strongly deformed with respect to the AdS$_5$ geometry at the infrared brane, we have proven that extra physics has to be introduced to describe the anomalous magnetic moment of the muon. This job is done by a set of vector-like leptons, mixed with the physical muon through Yukawa interactions and with a high degree of compositeness. The theory is consistent with all electroweak indirect, direct and theoretical constraints, the most sensitive ones being the modification of the $Z\mu\mu$ coupling, oblique observables, and constraints on the stability of the electroweak minimum. They impose lower bounds on the compositeness and on the mass of the lightest vector-like lepton ($\gtrsim 270$ GeV). Vector-like leptons could easily be produced in Drell-Yan processes at the LHC and detected at $\sqrt{s}=13$ TeV.

Using the same setup, M. Quiros, in collaboration with Dr. E. Megias and L. Salas, explored the limits on lepton-flavor universality (LFU) violation in theories where the hierarchy problem is solved by means of a warped extra dimension. In those theories, LFU violation in fermion interactions with Kaluza-Klein modes of gauge bosons is provided ab initio when different flavors of fermions are localized differently along the extra dimension. As this fact arises from the mass pattern of quarks and leptons, LFU violation is natural in this class of theories. We analyzed the experimental data pointing towards LFU violation, as well as the most relevant electroweak and flavor observables, and the LFU tests in the $\mu/e$ and $\tau/\mu$ sectors.
We find agreement with $R_{K^{(\ast)}}$ and $R_{D^{(\ast)}}$ data at 95% CL, provided the third-generation left-handed fermions are composite ($0.14 < c_{b_L} < 0.28$ and $0.27 < c_{\tau_L} < 0.33$), and find the absolute limits $R_{K^{(\ast)}}\gtrsim 0.79$ and $R_{D^{(\ast)}}/R_{D^{(\ast)}}^{\rm SM}\lesssim 1.13$. Moreover, we predict $\mathcal B( B\to K\nu\bar\nu)\gtrsim 1.14\times 10^{-5}$ at 95% CL, smaller than the present experimental upper bound but a few times larger than the Standard Model prediction. The bounds translate into an allowed region for the compositeness parameters of the left-handed bottom and tau, as well as a region of the parameter $(V_{u_L}^*)_{32}/V_{cb}^{CKM}$, as shown in figure 2.

Collider phenomenology and Beyond the Standard Model physics

G. Panico has been working on different topics connected to collider phenomenology and Beyond the Standard Model physics, following three main research directions. The first focused on the measurement of the Higgs trilinear self-coupling at the LHC and future colliders. In a series of two papers, he explored the sensitivity reach at the LHC and at future lepton colliders, showing how, through a global fit of the data, complementary information can be obtained from single- and double-Higgs production processes.

A second research line was devoted to precision electroweak measurements at hadron colliders. He showed how an interplay between high-energy reach and a careful selection of clean processes allows competitive measurements that can match, and in several cases surpass, the precision obtained at LEP. In three papers, he considered several processes, including di-lepton and di-boson production channels.

In the third research line, he investigated the implications of the stringent measurements of electron and neutron electric dipole moments for top partners in composite Higgs theories.
Current bounds allow setting exclusions in the TeV mass range, while near-future improvements will push the reach well above the 10 TeV scale.

Precision tests of the SM at the LHC

A. Pomarol has shown that precision tests of the SM at the LHC are possible by measuring differential cross-sections at high invariant mass, exploiting in this way the growth with energy of the corrections induced by new physics. Together with his collaborators, he has classified the leading growing-with-energy effects in longitudinal diboson and associated Higgs production processes, providing the reach on these effects at the LHC and future colliders. This method will allow tests of the SM electroweak sector at the per-mille level, in competition with LEP bounds.

They have also studied strongly-coupled theories close to the conformal transition using holography, to understand the presence of light scalars, as recent lattice simulations seem to suggest. With Yuri Gershtein, they have analyzed the plethora of new LHC results to assess their impact on extra-dimensional models, to be published in the "Review of Particle Physics" (2017 Edition).
2022-11-27 16:05:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6488078236579895, "perplexity": 1345.160380366659}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00477.warc.gz"}
https://discourse.julialang.org/t/simpleatsit5/90318
# SimpleATsit5?

I'm just getting started with the DifferentialEquations universe. In the first paragraph after this heading in the documentation, it is stated that SimpleATsit5 is a fast, viable solver for small, non-stiff ODEs. I have a simple 2×2 BVP, so I would like to try it as a possible faster alternative to Tsit5. But when I try to use it, e.g. in `solve(bvp, Shooting(SimpleATsit5()))`, I get `UndefVarError: SimpleATsit5 not defined`. Is there another way to access it?

They are defined in GitHub - SciML/SimpleDiffEq.jl: Simple differential equation solvers in native Julia for scientific machine learning (SciML). The docs should arguably be updated.

Thanks! `using SimpleDiffEq` did the trick. Surprisingly, SimpleATsit5 is 10 times slower for my problem than Tsit5. In any case, I submitted a PR to the documentation showing how to access SimpleATsit5.
2022-12-05 05:36:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7150697112083435, "perplexity": 2212.046085029222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00850.warc.gz"}
http://quant.stackexchange.com/tags/asset-pricing/new
# Tag Info

0

Well, this is not my area of expertise, but I have come across this sort of work before in Time Series Analysis / Financial Econometrics. I don't know how much detail you want, but from my understanding the author has written the two equations in State Space Form. I believe it is fairly common to write ARCH and GARCH models in this fashion. There are a lot of ...

1

The R code is correct. You could also use the I() operator. You can look here on page 53. The code then would be `lm(stock~market+I(market^2)+I(market^3), data=example)`. EDIT, going more into detail: doing the above, you define regressors $market^2$ and $market^3$. The coefficients will be calculated the usual way (covariance of response with the regressors ...

0

To many statistical questions you can get frequentist and Bayesian answers which actually coincide. One such subject is covariance matrix shrinkage and Bayesian regression. Have a look at the article "Honey, I Shrunk the Sample Covariance Matrix" by Ledoit and Wolf. They introduce a transformation of the covariance matrix, so that the diagonals become more ...

0

I would recommend "Active Portfolio Management" by Richard Grinold and Ronald Kahn. The book builds up most theories used in portfolio composition in much detail.

0

Yes. If on the y axis you have excess returns, then the intercept of the line is zero. These are the implications of the CAPM model. E.g. for the SML: $E[R_{i,t}^e]=\beta \lambda_t$, where $R_{i,t}^e$ is the excess return on stock $i$ at time $t$ and $\lambda$ is the market price of risk.

1

As a simple example: if stock A went up a lot in 2014 and also went up a lot in 2015, it could be: (a) that stock A is a high-beta stock and the market was up in both years. This is the cross-sectional property of expected returns. Some stocks, in this case high-beta stocks, go up more than others when the market goes up. (b) Somehow the fact that Stock A went ...

Top 50 recent answers are included
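As an illustration of the cubic fit in the second answer, the R call `lm(stock~market+I(market^2)+I(market^3))` amounts to ordinary least squares on the design matrix with columns $[1, market, market^2, market^3]$. A Python sketch (the data and coefficient values here are synthetic, not from the answer):

```python
import numpy as np

# Synthetic data: stock returns as a known cubic in market returns (no noise).
market = np.linspace(-0.1, 0.1, 50)
true_coef = [0.001, 1.2, 3.0, -5.0]  # intercept, linear, quadratic, cubic
stock = (true_coef[0] + true_coef[1]*market
         + true_coef[2]*market**2 + true_coef[3]*market**3)

# Design matrix with columns [1, market, market^2, market^3], as in the R formula.
X = np.column_stack([np.ones_like(market), market, market**2, market**3])
beta, *_ = np.linalg.lstsq(X, stock, rcond=None)
print(beta)  # recovers the true coefficients up to floating-point error
```

With noisy data the same call gives the usual least-squares estimates, matching R's `lm` output for this formula.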
2015-07-31 15:32:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6644697189331055, "perplexity": 688.761684050313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988308.23/warc/CC-MAIN-20150728002308-00160-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.snapxam.com/solver?p=%5Clim_%7Bx%5Cto919191919991%7D%5Cleft%28%5Ccos%5Cleft%28%5Cfrac%7Bx%7D%7Bx%7D%5Cright%29%5Cright%29
# Step-by-step Solution

Problem to solve:

$\lim_{x\to919191919991}\left(\cos\left(\frac{x}{x}\right)\right)$

Learn how to solve limits by direct substitution problems step by step online. Simplify the fraction $\frac{x}{x}$ to $1$. The cosine of $1$ equals $0.540302$. The limit of a constant is just the constant.

$0.540302$

### Main topic: Limits by direct substitution

~ 0.02 s
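The substitution step is easy to check numerically (a quick sketch, not part of the solver output): since $x/x = 1$ for every nonzero $x$, the limit is simply $\cos(1)$.

```python
import math

# lim_{x -> 919191919991} cos(x/x): x/x equals 1 for any nonzero x,
# so direct substitution gives cos(1).
x = 919191919991
value = math.cos(x / x)
print(round(value, 6))  # 0.540302
```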
2021-03-09 10:45:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98126620054245, "perplexity": 2992.756327740822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00261.warc.gz"}
http://meta.stackexchange.com/questions/156622/what-did-happen-with-my-two-flags/156624
What did happen with my two flags?

I don't quite understand what is happening here. You can see that I flagged 8 comments, of which 4 are helpful and 2 are declined, but I'm not able to find the status of my other 2 flags.

Update: here I can see a "waiting for review" entry in my flag history, so with reference to hims056's comment, can't we have the same status for comment flags?

- This is my comments stats – hims056 Nov 24 '12 at 5:10
- but here I can see a waiting for review in flag history? – hotveryspicy Nov 24 '12 at 5:11
- That is for a question or answer's flag, not a comment's flag. – hims056 Nov 24 '12 at 5:12
- There was a time when the status was not recorded — that was a while ago now, but people who've been around for a while, like me on SO, have totals with a big gap (42 on flagged questions; 23 on flagged comments). It isn't a critical statistic; don't fret. – Jonathan Leffler Nov 24 '12 at 7:45
- Jarrod and I are moving where the flags are stored but it may take up to a few weeks as it's a major project - we'll be sure to look at this when we do so. – Nick Craver Nov 24 '12 at 13:23

Comment flags, like comments, are second-class citizens. You are only shown statistics, not the actual comments or the status of flags still waiting for review. Your statistics show that you flagged 8 comments; 4 of those flags were deemed helpful, 2 were declined, and the remaining 2 have either not yet been handled or have been handled automatically, and thus no longer have a status. (Note that veteran SO comment flaggers have another wrinkle: the comment flag system didn't track comment flags at all until those stats were implemented, so those of us who have been around long may have flags that were either helpful or declined but were not counted in the helpful or declined counters. They are counted in the total flag count for comments.)
Since comment flags don't get you anything anyway (other than a warm fuzzy feeling that you have made the site a better place), I wouldn't worry about this too much in any case.

- This means total flagged - (helpful + declined) = still to review, is it? Moreover, can you just have a look at the update part? – hotveryspicy Nov 24 '12 at 5:21
- @hotveryspicy: That's answered by "comment flags ... are second-class citizens". That information is not given to us, so the answer, currently, is "no, we don't have that status information". – Martijn Pieters Nov 24 '12 at 5:26
- @hotveryspicy: And as animuson stated, automatically handled flags are also not marked as helpful or declined. They are just no longer waiting for review. – Martijn Pieters Nov 24 '12 at 5:27

There are some cases where comment flags get dismissed without a status being applied. Most notably, comment flags which are automatically handled (such as flagging an "accept rate" comment) do not get marked as helpful. This causes your total number of comments flagged to increase without increasing either of your helpful or declined comment flags. I'm sure a vast majority of users who participate in comment flagging don't have numbers which add up.
2015-01-27 15:16:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4571138620376587, "perplexity": 2803.359331409887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00087-ip-10-180-212-252.ec2.internal.warc.gz"}
http://proofsfromthebook.com/2015/06/28/slope-of-perpendicular-lines/
# Slope of Perpendicular Lines Theorem

If you have already learned about systems of linear equations, then you have probably discussed that the product of the slopes of perpendicular lines is $-1$. The proof of this theorem comes from the fact that any point $(x,y)$ rotated 90 degrees about the origin becomes $(-y,x)$. One example of this is shown below. The point $(3, 4)$, when rotated $90$ degrees counterclockwise, becomes $(-4,3)$. With this fact, we prove the theorem.

Slope of Perpendicular Lines Theorem. If two lines with slopes $m_1$ and $m_2$ are perpendicular, then $m_1 m_2 = -1$.

Proof. Let $P(x_1, y_1)$ and $Q(x_2, y_2)$ be points on line $PQ$ passing through the origin. If we rotate the points about the origin, then the new coordinates of the points will be $P^\prime = (-y_1, x_1)$ and $Q^\prime = (-y_2, x_2)$. If we let $m_1$ be the slope of $PQ$ and $m_2$ be the slope of $P^\prime Q^\prime$, then

$m_1 = \displaystyle \frac{y_2 - y_1}{x_2 - x_1}$

$m_2 = \displaystyle \frac{x_2 - x_1}{-y_2 - (-y_1)} = \frac{x_2 - x_1}{-(y_2 - y_1)} = -\frac{x_2 - x_1}{y_2 - y_1}$

Multiplying $m_1$ and $m_2$, we have

$m_1 m_2 = \left( \displaystyle \frac{y_2 - y_1}{x_2 - x_1} \right) \left( \displaystyle -\frac{x_2 - x_1}{y_2 - y_1} \right) = -1.$

That proves the theorem above.
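The theorem and the rotation fact it rests on are easy to verify numerically. The following sketch (sample points and function names are mine) rotates two points 90 degrees counterclockwise about the origin and checks that the slope product is $-1$:

```python
def rotate90(p):
    """Rotate a point 90 degrees counterclockwise about the origin: (x, y) -> (-y, x)."""
    x, y = p
    return (-y, x)

def slope(p, q):
    """Slope of the line through p and q."""
    return (q[1] - p[1]) / (q[0] - p[0])

P, Q = (1.0, 2.0), (4.0, 6.0)
m1 = slope(P, Q)                      # slope of PQ: 4/3
m2 = slope(rotate90(P), rotate90(Q))  # slope of the rotated (perpendicular) line: -3/4
print(m1 * m2)  # -1.0 up to floating-point rounding
```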
2022-07-03 01:57:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295119643211365, "perplexity": 111.55447903811906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00678.warc.gz"}
https://techwhiff.com/learn/consider-the-diagram-to-the-right-auli-ie-iowest/449639
# Consider the diagram to the right

###### Question:

Consider the phase diagram to the right. [Figure: pressure–temperature phase diagram for water; pressure axis marked at 4.58, 330, 760, 900 and 166,000 torr; temperature axis labelled "Temperature (°C)" and marked at -10, 0, 0.01, 78, 100 and 374 °C.]

a) The triple point occurs at
b) The normal melting point is
c) The normal boiling point is
d) The normal sublimation point is
e) What phase do we have at 550 torr and 0.005 °C?
f) How many phase(s) are present at 2.15 torr and -10 °C? What are they?
g) As pressure is increased on gaseous water at 0.002 °C, how many phases and/or phase mixtures will be encountered?
h) As the temperature is increased on solid water at 4.58 torr, how many phases and/or phase mixtures will be encountered? Name them.
i) The critical point occurs at
2022-10-05 04:53:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26583239436149597, "perplexity": 5985.10012234787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00094.warc.gz"}
https://www.mathway.com/examples/pre-algebra/simplifying-and-evaluating-expressions/simplifying?id=2
# Pre-Algebra Examples Pull terms out from under the radical, assuming positive real numbers. Remove parentheses around .
2018-08-16 02:33:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9747639894485474, "perplexity": 9591.191418053058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210408.15/warc/CC-MAIN-20180816015316-20180816035316-00124.warc.gz"}
https://www.mimuw.edu.pl/en/aktualnosci/seminaria/antiramsey-colorings-uncountable-squares-and-geometry-nonseparable-banach
## Faculty of Mathematics, Informatics and Mechanics, University of Warsaw

# News — Events

Topology and Set Theory Seminar

## On antiramsey colorings of uncountable squares and geometry of nonseparable Banach spaces

Speaker: Kamil Ryduchowski

2023-01-25 16:15

A subset Z of a Banach space X is said to be r-equilateral (r-separated) if every two distinct elements of Z are at distance exactly (at least) r from each other. We will address the question of the existence of uncountable equilateral and (1 + e)-separated sets (e > 0) in the unit spheres of some nonseparable Banach spaces X, induced by antiramsey colorings of pairs of countable ordinals. The corollaries are that non(M) = \omega_1 implies 1) the existence of an equivalent renorming of the Hilbert space of density \omega_1 which does not admit any uncountable equilateral set, and 2) the existence of a nonseparable Hilbert-generated Banach space containing an isomorphic copy of l_2 in each nonseparable subspace, whose unit sphere does not admit an uncountable equilateral set and does not admit an uncountable (1 + e)-separated set for any e > 0. It turns out that in our approach additional set-theoretic axioms are inevitable, since under MA + \neg CH the geometry of the spaces under consideration is regular. The talk is based on joint work with Piotr Koszmider: https://arxiv.org/abs/2301.07413 This will be the last talk of this semester.
2023-02-07 18:07:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8147115111351013, "perplexity": 1614.9759602680547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00402.warc.gz"}
https://handwiki.org/wiki/Amplitude_damping_channel
# Amplitude damping channel In the theory of quantum communication, an amplitude damping channel is a quantum channel that models physical processes such as spontaneous emission. A natural process by which this channel can occur is a spin chain through which a number of spin states, coupled by a time-independent Hamiltonian, can be used to send a quantum state from one location to another. The resulting quantum channel ends up being identical to an amplitude damping channel, for which the quantum capacity, the classical capacity and the entanglement-assisted classical capacity of the quantum channel can be evaluated. ## Qubit Channel The amplitude-damping channel models energy relaxation from an excited state to the ground state. On a two-dimensional system, or qubit, with decay probability $\displaystyle{ \gamma }$, the channel's action on a density matrix $\displaystyle{ \rho }$ is given by $\displaystyle{ {\cal N}_\gamma(\rho) = K_0 \rho K_0^\dagger + K_1 \rho K_1^\dagger\;, }$ where $\displaystyle{ K_0, K_1 }$ are the Kraus operators given by $\displaystyle{ K_0 = \begin{pmatrix}1&0\\0&\sqrt{1-\gamma}\end{pmatrix}, \; K_1 = \begin{pmatrix}0&\sqrt{\gamma}\\0&0\end{pmatrix}\;. }$ Thus $\displaystyle{ {\cal N}_\gamma\left[\begin{pmatrix}\rho_{00}&\rho_{01}\\\rho_{10}&\rho_{11}\end{pmatrix}\right] = \begin{pmatrix}\rho_{00}+\gamma \rho_{11} & \sqrt{1-\gamma} \rho_{01} \\ \sqrt{1-\gamma} \rho_{10} & (1-\gamma) \rho_{11}\end{pmatrix}\;. }$
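As a quick numerical sanity check of the Kraus form above (a sketch, not part of the article; function and variable names are mine), one can apply the channel to a sample density matrix and confirm both the closed-form output and trace preservation:

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply N_gamma(rho) = K0 rho K0^dag + K1 rho K1^dag for decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

gamma = 0.3
rho = np.array([[0.25, 0.2], [0.2, 0.75]])  # a valid qubit density matrix
out = amplitude_damping(rho, gamma)

# Matches [[rho00 + g*rho11, sqrt(1-g)*rho01], [sqrt(1-g)*rho10, (1-g)*rho11]].
print(out)
print(np.trace(out))  # trace is preserved: ~1.0
```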
The state $\displaystyle{ \rho_{A} }$ is prepared on A by first decoupling the spins on A from those on the remainder of the chain. After preparation, $\displaystyle{ \rho_{A} }$ is allowed to interact with the state on the remainder of the chain, which initially has the state $\displaystyle{ \sigma_{0} }$. The state of the spin chain as time progresses can be described by $\displaystyle{ R(t) = U(t)(\rho_{A} \otimes \sigma_{0})U^{\dagger}(t) }$. From this relationship we can obtain the state of the spins belonging to register B by tracing away all other states of the chain. $\displaystyle{ \rho_B(t)= \mbox{Tr}^{(B)} [ U(t) (\rho_A \otimes \sigma_0) U^{\dagger}(t)] }$ This gives the mapping below, which describes how the state on A is transformed as a function of time as it is transmitted over the quantum channel to B. U(t) is just some unitary matrix which describes the evolution of the system as a function of time. $\displaystyle{ \rho_A \rightarrow \mathcal{M}(\rho_A ) \equiv \rho_B(t)= \mbox{Tr}^{(B)} [ U(t) (\rho_A \otimes \sigma_0) U^{\dagger}(t)] }$ There are, however, a few issues with this description of the quantum channel. One of the assumptions involved with using such a channel is that we expect that the states of the chain are not disturbed. While it may be possible for a state to be encoded on A without disturbing the chain, a reading of the state from B will influence the states of the rest of the spin chain. Thus, any repeated manipulation of the registers A and B will have an unknown impact on the quantum channel. Given this fact, solving the capacities of this mapping would not be generally useful, since it will only apply when several copies of the chain are operating in parallel. In order to calculate meaningful values for these capacities, the simple model below allows for the capacities to be solved exactly. 
### Solvable Model A spin chain, which is composed of a chain of particles with spin 1/2 coupled through a ferromagnetic Heisenberg interaction, is used, and is described by the Hamiltonian: $\displaystyle{ H=-\sum_{\langle i,j \rangle} \hbar J_{ij} \left({\sigma}_x^{i}{\sigma}_x^{j} +{\sigma}_y^{i}{\sigma}_y^{j}+\gamma {\sigma}_z^{i}{\sigma}_z^{j}\right)-\sum_{i=1}^{N} \hbar B_i \sigma_z^{i} }$ It is assumed that the input register, A and the output register B occupy the first k and last k spins along the chain, and that all spins along the chain are prepared to be in the spin down state in the z direction. The parties then use all k of their spin states to encode/decode a single qubit. The motivation for this method is that if all k spins were allowed to be used, we would have a k-qubit channel, which would be too complex to be completely analyzed. Clearly, a more effective channel would make use of all k spins, but by using this inefficient method, it is possible to look at the resulting maps analytically. To carry out the encoding of a single bit using the k available bits, a one-spin up vector is defined $\displaystyle{ |j \rangle }$, in which all spins are in the spin down state except for the j-th one, which is in the spin up state. $\displaystyle{ | { j}\rangle \equiv \left|\downarrow \downarrow \cdots \downarrow \uparrow \downarrow \cdots \downarrow \right\rangle }$ The sender prepares his set of k input spins as: $\displaystyle{ |\Psi\rangle_A \equiv \alpha \left|\Downarrow\right\rangle_A + \beta|\phi_1 \rangle_A }$ where $\displaystyle{ \left|\Downarrow\right\rangle }$ is the state where all positions have spin down, and $\displaystyle{ |\phi_1 \rangle }$ is the superposition of all possible one-spin up states. Using this input, it is possible to find a state which describes the whole chain at a given time t. 
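The encoding just described can be made concrete for a small register. In the sketch below (k = 3 is an arbitrary choice; names are mine), spin configurations of the k register spins are encoded as bit strings, and the all-down state, the uniform one-spin-up superposition $|\phi_1\rangle$, and the input $|\Psi\rangle_A$ are built explicitly:

```python
import numpy as np

k = 3            # number of register spins
dim = 2 ** k     # dimension of the register Hilbert space

def one_up(j):
    """|j>: all spins down except the j-th (0-indexed), which is up.
    Bit j of the basis index set to 1 means spin j is up."""
    v = np.zeros(dim)
    v[1 << j] = 1.0
    return v

all_down = np.zeros(dim)
all_down[0] = 1.0                                      # the all-down state
phi1 = sum(one_up(j) for j in range(k)) / np.sqrt(k)   # uniform one-spin-up superposition

alpha, beta = 0.6, 0.8                                 # |alpha|^2 + |beta|^2 = 1
psi_A = alpha * all_down + beta * phi1

print(np.dot(psi_A, psi_A))  # normalized: ~1.0
```

Evolving such an input under $U(t)$ then yields the state of the whole chain referred to next.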
From such a state, tracing out the N-k spins not belonging to the receiver, as we would have done with the earlier model, leaves the state on B: $\displaystyle{ \rho_B(t) = (|\alpha|^2 + (1-\eta) |\beta|^2) \left| \Downarrow \right\rangle_B\left\langle \Downarrow \right| + \eta |\beta|^2 |\phi_1^{\prime}\rangle_B \langle \phi_1^{\prime}|+ \sqrt{\eta} \alpha \beta^* \left| \Downarrow \right\rangle_B\langle \phi_1^{\prime} | + \sqrt{\eta} \alpha^* \beta | \phi_1^{\prime} \rangle_B\left\langle \Downarrow \right| }$ where $\displaystyle{ \eta }$ is a constant defining the efficiency of the channel. If we represent the states in which one spin is up by $\displaystyle{ |1 \rangle }$ and those where all spins are down by $\displaystyle{ | 0 \rangle }$, this becomes recognizable as the result of applying the amplitude damping channel $\displaystyle{ \mathcal{D}_\eta }$, characterized by the following Kraus operators: $\displaystyle{ A_0 = |0\rangle\langle 0| +\sqrt{\eta}|1\rangle \langle 1| }$; $\displaystyle{ A_1 = \sqrt{1-\eta}|0\rangle \langle 1| }$ Evidently, the fact that an amplitude damping channel describes the transmission of quantum states across the spin chain stems from the fact that the Hamiltonian of the system conserves energy. While energy can be spread out as the one-spin-up state is transferred along the chain, it is not possible for spins in the down state to suddenly gain energy and become spin-up states.

## Capacities of the Amplitude Damping Channel

By describing the spin chain as an amplitude damping channel, it is possible to calculate the various capacities associated with the channel. One useful property of this channel, which is used to find these capacities, is the fact that two amplitude damping channels with efficiencies $\displaystyle{ \eta }$ and $\displaystyle{ \eta' }$ can be concatenated. Such a concatenation gives a new channel of efficiency $\displaystyle{ \eta\eta' }$.
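The channel is easy to sketch numerically. The following snippet (an illustration with a made-up input state) applies the Kraus operators $\displaystyle{ A_0, A_1 }$ above to a qubit density matrix in the $\displaystyle{ \{|0\rangle, |1\rangle\} }$ basis, and checks both trace preservation and the concatenation rule just mentioned.

```python
import numpy as np

def amplitude_damping(rho, eta):
    """Apply D_eta to a qubit density matrix via its Kraus operators.
    Basis convention: |0> = all spins down, |1> = the one-spin-up sector."""
    A0 = np.array([[1, 0], [0, np.sqrt(eta)]], dtype=complex)
    A1 = np.array([[0, np.sqrt(1 - eta)], [0, 0]], dtype=complex)
    return A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T

rho = np.array([[0.25, 0.2], [0.2, 0.75]], dtype=complex)  # made-up input state
out = amplitude_damping(rho, eta=0.8)

# A0^dag A0 + A1^dag A1 = I, so the channel preserves the trace ...
assert abs(np.trace(out).real - 1.0) < 1e-12
# ... and concatenation behaves as stated: D_eta o D_eta' = D_(eta eta')
lhs = amplitude_damping(amplitude_damping(rho, 0.9), 0.8)
assert np.allclose(lhs, amplitude_damping(rho, 0.9 * 0.8))
```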
### Quantum Capacity

In order to calculate the quantum capacity, the map $\displaystyle{ \mathcal{D}_\eta }$ is represented as follows: $\displaystyle{ \mathcal{D}_\eta (\rho) \equiv \mbox{Tr}_C [ V \left( \rho \otimes |0 \rangle_C \langle 0| \right) V^{\dagger}]\;. }$ This representation of the map is obtained by adding an auxiliary Hilbert space $\displaystyle{ \mathcal{H}_C }$ to $\displaystyle{ \mathcal{H}_A }$, and introducing an operator V which operates on A and C. A complementary channel, $\displaystyle{ \tilde{\mathcal{D}}_\eta }$, is also defined, where instead of tracing over C, we trace over A. A swapping operation S which transforms A into C is defined. Using this operation, as well as the rule for concatenation of amplitude damping channels, it is shown that for $\displaystyle{ \eta \geqslant 0.5 }$: $\displaystyle{ \tilde{\mathcal{D}}_\eta (\rho) = S \mathcal{D}_{(1-\eta)/\eta} \left({\mathcal{D}}_{\eta} (\rho)\right)\;. }$ This relationship demonstrates that the channel is degradable, which guarantees that the coherent information of the channel is additive. This implies that the quantum capacity is achieved for a single channel use. An amplitude damping mapping is applied to a general input state, and from this mapping, the von Neumann entropy of the output is found to be: $\displaystyle{ S(\mathcal{D}_{\eta} (\rho)) = H_2 (\left(1 + \sqrt{(1- 2\,\eta\, p)^2 + 4\,\eta\, |\gamma|^2} \right)/2)\;, }$ where $\displaystyle{ p\in[0,1] }$ is the population of the state $\displaystyle{ |1 \rangle }$ and $\displaystyle{ |\gamma|\leqslant \sqrt{(1-p)p} }$ is a coherence term.
By looking at a purification of the state, it is found that: $\displaystyle{ S((\mathcal{D}_{\eta} \otimes1_{anc}) (\Phi)) = H_2 (\left(1 + \sqrt{(1- 2\,(1-\eta)\, p)^2 + 4\,(1-\eta)\, |\gamma|^2} \right)/2) }$ In order to maximize the quantum capacity, we choose $\displaystyle{ \gamma = 0 }$ (due to the concavity of the entropy), which yields the following as the quantum capacity: $\displaystyle{ Q \equiv \max_{p\in[0,1]} \; \Big\{ \; H_2 (\eta\, p) - H_2((1-\eta)\, p)\; \Big\}\; }$ Finding the quantum capacity for $\displaystyle{ \eta \lt 0.5 }$ is straightforward: it vanishes, as a direct result of the no-cloning theorem. The fact that channels can be composed in this fashion implies that the quantum capacity of the channel cannot decrease as a function of $\displaystyle{ \eta }$.

### Entanglement Assisted Classical Capacity

To calculate the entanglement assisted capacity we must maximize the quantum mutual information. This is found by adding the input entropy of the message to the coherent information derived in the previous section. It is again maximized for $\displaystyle{ \gamma = 0 }$. Thus, the entanglement assisted classical capacity is found to be $\displaystyle{ C_E \equiv \max_{p\in[0,1]} \; \Big\{ \; H_2( p) + H_2 (\eta\, p) - H_2((1-\eta)\, p)\; \Big\}\; }$

### Classical Capacity

We now calculate C1, which is the maximum amount of classical information that can be transmitted by non-entangled encodings over parallel channel uses. This quantity acts as a lower bound for the classical capacity, C. To find C1, the maximization of the Holevo information is carried out for a single channel use (n = 1). We consider an ensemble of messages, each with probability $\displaystyle{ \xi_{k} }$.
The Holevo information is found to be: $\displaystyle{ \chi \equiv H_2 \left(\frac{1 + \sqrt{(1- 2 \,\eta\,p)^2 +4 \,\eta\, |\gamma|^2}}{2} \right)-\sum_k \xi_k H_2 \left(\frac{1 + \sqrt{(1- 2 \,\eta\,p_k)^2 +4 \,\eta\, |\gamma_k|^2}}{2} \right)\; }$ In this expression, $\displaystyle{ p_k }$ and $\displaystyle{ \gamma_k }$ are the population and a coherence term, as defined before, and $\displaystyle{ p }$ and $\displaystyle{ \gamma }$ are the average values of these. In order to find C1, first an upper bound is found for C1, and then a set of $\displaystyle{ p_k,\gamma_k,\xi_k }$ are found that satisfy this bound. As before, $\displaystyle{ \gamma }$ is set to 0 in order to maximize the first term of the Holevo information. From here we use the fact that the binary entropy $\displaystyle{ H_2(z) }$ is decreasing with respect to $\displaystyle{ |z - 1/2| }$, as well as the fact that $\displaystyle{ H_2((1 + \sqrt{1-z^2})/2) }$ is convex with respect to z, to find the following inequality: $\displaystyle{ \sum_k \xi_k H_2 \left(\frac{1 + \sqrt{(1- 2 \,\eta\,p_k)^2+4 \,\eta\, |\gamma_k|^2}}{2} \right) \geqslant H_2 \left(\frac{1 + \sqrt{1- 4 \,\eta\,(1-\eta) (\sum_k \xi_k p_k)^2}}{2} \right) }$ By maximizing over all choices of p, the following upper bound for C1 is found: $\displaystyle{ C_1 \leqslant \max_{p\in[0,1]} \Big\{ H_2 \left(\eta \, p \right)- H_2 \left(\frac{1 + \sqrt{1- 4 \,\eta\,(1-\eta) \,p ^2}}{2} \right) \Big\} \; }$ This upper bound is found to be the value for C1, and the parameters that realize this bound are $\displaystyle{ \xi_k=1/d \,\! }$, $\displaystyle{ p_k=p \,\! }$, and $\displaystyle{ \gamma_k=e^{2\pi i k/d} \sqrt{(1-p)p} }$.

### Numerical Analysis of the Capacities

From the expressions for the various capacities, it is possible to carry out a numerical analysis on them.
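Such an analysis is easy to sketch. The snippet below (an illustration; the grid search over p is a crude stand-in for the exact maximizations) evaluates the three capacity formulas derived above.

```python
import numpy as np

def H2(x):
    """Binary entropy in bits, with the convention H2(0) = H2(1) = 0."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

p = np.linspace(0.0, 1.0, 2001)   # grid for the maximization over p

def Q(eta):
    """Quantum capacity; zero for eta <= 0.5 (no-cloning)."""
    return max(0.0, float(np.max(H2(eta * p) - H2((1 - eta) * p))))

def C_E(eta):
    """Entanglement-assisted classical capacity."""
    return float(np.max(H2(p) + H2(eta * p) - H2((1 - eta) * p)))

def C1(eta):
    """One-shot classical capacity (unentangled encodings)."""
    inner = (1 + np.sqrt(1 - 4 * eta * (1 - eta) * p**2)) / 2
    return float(np.max(H2(eta * p) - H2(inner)))

# At eta = 1 the channel is noiseless: Q and C1 approach 1 bit, C_E approaches 2.
assert abs(Q(1.0) - 1.0) < 1e-6 and abs(C_E(1.0) - 2.0) < 1e-6
assert Q(0.4) < 1e-12   # below eta = 0.5 no quantum information flows
```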
For an $\displaystyle{ \eta }$ of 1, the three capacities are maximized, which leads to the quantum and classical capacities both being 1, and the entanglement-assisted classical capacity being 2. As mentioned earlier, the quantum capacity is 0 for any $\displaystyle{ \eta }$ less than 0.5, while the classical capacity and the entanglement-assisted classical capacity reach 0 for $\displaystyle{ \eta }$ of 0. When $\displaystyle{ \eta }$ is less than 0.5, too much information is lost to the environment for quantum information to be sent to the receiving party.

## Effectiveness of Spin-Chains as a Quantum Communication Channel

Having calculated the capacities for the amplitude damping channel as a function of the efficiency of the channel, it is possible to analyze the effectiveness of such a channel as a function of the distance between the encoding site and the decoding site. Bose demonstrated that the efficiency drops as a function of $\displaystyle{ |r-s|^{-2/3} }$, where r is the position of the decoding and s is the position of the encoding. Because the quantum capacity vanishes for $\displaystyle{ \eta }$ less than 0.5, the distance between the sender and the receiver must be very short in order for any quantum information to be transmitted. Therefore, long spin chains are not suitable for transmitting quantum information.

## Future Study

Possibilities for future study in this field would include methods whereby spin-chain interactions could be used as a more effective channel. This would include the optimization of the values of $\displaystyle{ \eta }$ by looking more closely at the interaction between the spins, and choosing interactions which have a positive effect on the efficiency. Such an optimization could allow for more effective transmission of quantum data over distance. An alternative to this would be to split the chain into smaller segments, and to use a large number of spin chains to transmit quantum data.
This would be effective since the spin chains are themselves good at transmitting quantum data over short distances. On top of this, it would be possible to increase the quantum capacity by allowing free two-way classical communication between the sender and receiver and making use of quantum effects such as quantum teleportation. Other areas of study would include an analysis of an encoding that makes use of the full k spins of the registers, as this would allow more information to be communicated at a time.
https://stats.stackexchange.com/questions/152461/how-to-generate-uncorrelated-white-noise-sequence-in-r-without-using-arima-sim
# How to generate uncorrelated white noise sequence in R without using arima.sim?

I want to know how to generate an uncorrelated white noise sequence $WN(0,\sigma^2)$ in R **without using** `arima.sim(list(order=c(0,0,0)),200)`. The reason I post this here instead of Stack Overflow is that I feel it requires understanding the mathematical structure of white noise in order to build a program around it. If viewers feel that this question really belongs on Stack Overflow, please do not downvote it; just let me know, and I will migrate it there.

• just a few seconds quicker... – Christoph Hanck May 15 '15 at 10:24
• @ChristophHanck Yours has the virtue of being an answer (and +1 for it). I couldn't see how to make it one, but you made several good additional points there. – Glen_b May 15 '15 at 10:25
• mynamesJEFF -- I think it should survive on the basis that it requires statistical expertise to answer (just as you suggest) – Glen_b May 15 '15 at 10:27

You will have to specify some distribution, but if you are happy to go with the default choice of a normal distribution (as, in fact, does arima.sim, unless you override the default with some other choice of its rand.gen argument), then `rnorm(200)` will do the trick: it yields a series of uncorrelated (in fact, even independent) and identically distributed r.v.s.

White noise is simply a sequence of i.i.d. random variables. Because of that, you could just use:

```r
rnorm(n, mean = 0, sd = 1)
```

To give you a little more insight, here's how I would use it to generate a random walk:

```r
set.seed(15)
x = NULL
x[1] = 0
for (i in 2:100) {
  x[i] = x[i-1] + rnorm(1, 0, 1)
}
ts.plot(x, main = 'Random walk 1(Xt)', xlab = 'Time', ylab = '',
        col = 'blue', lwd = 2)
```

• This is the same solution posted over three years ago. If your objective is to show good R code for generating the random walk, then please consider using cumsum: it's built-in, clearer, and more efficient. An example is `ts.plot(cumsum(rnorm(100)))`.
– whuber Jun 23 '18 at 22:57
https://www.shaalaa.com/question-bank-solutions/the-integrated-rate-equation-first-order-reaction-products-integrated-rate-equations-half-life-reaction_484
# The Integrated Rate Equation for First Order Reaction is A → Products - Chemistry MCQ

The integrated rate equation for a first order reaction A → products is:

#### Options

• k = 2.303 t log_10 ([A]_0/[A]_t)
• k = -(1/t) ln ([A]_t/[A]_0)
• k = (2.303/t) log_10 ([A]_t/[A]_0)
• k = (1/t) ln ([A]_t/[A]_0)

#### Solution

k = -(1/t) ln ([A]_t/[A]_0)

Concept: Integrated Rate Equations - Half-life of a Reaction
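As a quick sanity check (a sketch using made-up concentration values), the selected answer agrees numerically with the equivalent base-10 form k = (2.303/t) log_10([A]_0/[A]_t):

```python
import math

# Made-up illustrative values: initial concentration [A]_0, concentration
# [A]_t after elapsed time t (arbitrary units).
A0, At, t = 1.00, 0.25, 120.0

k_ln = -(1.0 / t) * math.log(At / A0)        # k = -(1/t) ln([A]_t / [A]_0)
k_log = (2.303 / t) * math.log10(A0 / At)    # k = (2.303/t) log_10([A]_0 / [A]_t)

# The two forms agree up to the rounding 2.303 ~ ln(10).
assert abs(k_ln - k_log) / k_ln < 1e-3
```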
https://amathew.wordpress.com/2011/12/03/equivariant-k-theory/
The semester here is now over (save for final exams), which means that I hope to start posting on this blog more frequently again. One of my goals for the next couple of months is to understand the proof of the Atiyah-Singer index theorem. I’m pretty far from that point right now, so I’ll start with the foundations; this will have the additional effect of forcing me to engage more deeply with the basic stuff. (It’s all too easy for students — and I seem to be especially prone to this — to get flaky and learn mathematical terms without actually gaining understanding!) Ideally, I’m hoping to repeat a MaBloWriMo-type project. To start with, I’ve been reading Segal’s paper “Equivariant K-theory.” This post will cover some of the basic ideas in this paper. Classically, topological K-theory starts by assigning to any finite CW complex ${X}$ the category of vector bundles on ${X}$. This is an additive category, where all exact sequences split; one can thus define the Grothendieck group ${K(X)}$ of this category. This is obtained by taking the free abelian group on all symbols ${[E]}$ for ${E}$ a vector bundle on ${X}$, and quotienting by the relations ${[E] = [E'] + [E'']}$ whenever there is an isomorphism ${E \simeq E' \oplus E''}$. In other words, K-theory is the “group completion” of the monoid of isomorphism classes of vector bundles on ${X}$. Moreover, ${K(X)}$ becomes a ring because one has a tensor product operation on the category of vector bundles, which clearly commutes with direct sums. 1. Definitions The point of Segal’s paper is to generalize this to the equivariant setting. Namely, let ${G}$ be a compact Lie group. A ${G}$-space will be a space ${X}$ with a continuous ${G}$-action ${X \times G \rightarrow X}$. Usually, we will want ${X}$ to be compact. So let’s assume this for the present post. Definition 1 A ${G}$-vector bundle on ${X}$ is a vector bundle ${p: E \rightarrow X}$ where ${E}$ has the structure of a ${G}$-space such that ${p}$ is equivariant. 
Moreover, we require that the maps on fibers ${E_x \rightarrow E_{gx}}$ for ${x \in X, g \in G}$ be morphisms of vector spaces (i.e. linear). Clearly, there is a category of ${G}$-vector bundles on ${X}$ and ${G}$-equivariant maps; an equivariant map between ${G}$-vector bundles is simply a map of vector bundles which is ${G}$-equivariant. The basic example is when ${X}$ is acted upon trivially by ${G}$. In that case, we can think of a ${G}$-vector bundle as a continuously varying family of ${G}$-representations, as all the fibers ${E_x, x \in X}$ will be equipped with an action of ${G}$. An example of how we can get such equivariant vector bundles is to start with an ordinary vector bundle ${F \rightarrow X}$, and take the tensor power ${F^{\otimes k} \rightarrow X}$. This is canonically endowed with the structure of an ${S_k}$-vector bundle if ${X}$ is given the trivial ${S_k}$-action. Another useful example is when ${X = G/H}$ for a closed subgroup ${H \subset G}$. In this case, we claim: Proposition 2 There is an equivalence of categories between ${G}$-vector bundles on ${X = G/H}$ and finite-dimensional ${H}$-representations. Proof: To see the idea, note that if ${E \rightarrow G/H}$ is a ${G}$-vector bundle, then the fiber over the distinguished element ${\ast \in G/H}$ (the image of the identity in ${G}$) is acted on by ${H}$, and becomes an ${H}$-representation. So there is a functor from equivariant (${G}$-) vector bundles on ${G/H}$ to ${H}$-representations given by sending ${E \rightarrow G/H}$ to ${E_{\ast}}$. One has to check that this is an equivalence of categories, but this is because one can easily check that any vector bundle over ${G/H}$ is of the form ${E_{\ast} \times_H G}$ (where the product over ${H}$ denotes a natural identification). I won't spell out all the details. $\Box$ The category of equivariant vector bundles on a ${G}$-space ${X}$ naturally forms an additive category.
As before, we can define: Definition 3 The equivariant K-group ${K_G(X)}$ of ${X}$ is the Grothendieck group of equivariant vector bundles on ${X}$. One might object that the Grothendieck group is sometimes defined in terms of short exact sequences, and sometimes in terms of split exact sequences. We should probably check that an exact sequence always splits. Proposition 4 An exact sequence of ${G}$-vector bundles on ${X}$ given by ${0 \rightarrow E' \rightarrow E \rightarrow E'' \rightarrow 0}$ (i.e. such that the fibers are all exact) necessarily splits. Proof: To see this, we have to show that ${E' \rightarrow E }$ is a split injection. One way to see this is to choose a complement ${F'' \subset E}$ to ${E'}$; this will then map isomorphically to ${E''}$ and we will get an isomorphism ${E \simeq E' \oplus F'' \simeq E' \oplus E''}$. In the nonequivariant case, one would do this by choosing a continuously varying family of hermitian metrics on the fibers: that is, a hermitian metric on ${E}$. Then we could take ${F''}$ to be the orthogonal complement to ${E'}$ in ${E}$. Naturally this will not work here, because we won't necessarily get an equivariant subbundle. But we can do this if we choose an equivariant hermitian metric. This we can do by taking any hermitian metric (which is a section of ${E \otimes \overline{E}}$), and averaging under the ${G}$-action to get an equivariant section, which will be an equivariant hermitian metric. $\Box$ The group ${K_G(X)}$ is a contravariant functor in ${X}$ under equivariant maps of spaces ${f: Y \rightarrow X}$, because for such a map there is an additive functor ${f^*}$ from equivariant bundles on ${X}$ to equivariant bundles on ${Y}$. 2. The averaging trick So far, nothing here is non-standard from ordinary K-theory: the main clever trick that cropped up was the idea of averaging a nonequivariant section to get an equivariant one.
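A finite-group toy version of this averaging is easy to compute explicitly (my illustration; the post works with a compact Lie group, where the sum below becomes an integral against Haar measure): let ${C_3}$ act on ${\mathbb{R}^3}$ by cyclically permuting coordinates, and average an arbitrary inner product over the group. The result is an invariant inner product, and positivity survives the averaging.

```python
import numpy as np

# C_3 acting on R^3 by cyclic permutation matrices.
g = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
group = [np.linalg.matrix_power(g, k) for k in range(3)]

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = A @ A.T + 3 * np.eye(3)          # a (non-invariant) inner product on R^3

# Average over the group: the projection onto invariant metrics.
M_G = sum(h.T @ M @ h for h in group) / len(group)

for h in group:                       # M_G is now G-invariant ...
    assert np.allclose(h.T @ M_G @ h, M_G)
assert np.all(np.linalg.eigvalsh(M_G) > 0)   # ... and still positive definite
```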
We had to check (and I technically didn’t, but it’s pretty easy) that averaging a hermitian metric would give a hermitian one. Maybe this is worth spelling out more. Let ${E}$ be an equivariant vector bundle over the ${G}$-space ${X}$; then we let ${\Gamma(E)}$ be the vector space of all sections ${X \rightarrow E}$, and we let ${\Gamma^G(E)}$ be the subspace of equivariant sections. Note that ${\Gamma(E)}$ has a ${G}$-action, while ${\Gamma^G(E)}$ is just a vector space (or it has the trivial action). Now, ${\Gamma(E)}$ is actually a topological vector space, even a Banach space. Given a section ${s \in \Gamma(E)}$, we can define, using the normalized Haar measure on ${G}$, $\displaystyle s^G = \int_G s \circ g dg \in \Gamma^G(E);$ this map ${\Gamma(E) \rightarrow \Gamma^G(E)}$ is just the projection onto the subspace ${\Gamma^G(E)}$ of fixed points of the ${G}$-action. Here we genuinely use the compactness of ${G}$ to use the existence of an invariant measure on ${G}$ with total mass one. Let’s use this argument to prove a fact which is standard in ordinary K-theory, but requires a little extra work in the equivariant case. Namely, we can prove the homotopy invariance of equivariant K-theory: Proposition 5 Let ${f, g: X \rightarrow Y}$ be equivariantly homotopic equivariant maps of ${G}$-spaces. Then ${f^* = g^* }$ as maps ${K_G(Y) \rightarrow K_G(X)}$. Proof: As usual, we can reduce to the case of the two imbeddings ${X \rightrightarrows X \times [0, 1]}$. In this case, we essentially have to show the following. If ${E \rightarrow X \times [0, 1]}$ is a vector bundle on the product ${X \times [0, 1]}$, then ${E|_{X \times \left\{1\right\}}}$ and ${E|_{X \times \left\{0\right\}}}$ are isomorphic. In fact, we will show that the isomorphism class of ${E|_{X \times \left\{t\right\}}}$ (considered as an equivariant bundle on ${X}$) is locally constant in ${t}$, which is enough. 
To do this, we just need to show that ${E|_{X \times \left\{t\right\}}}$ is isomorphic to ${E|_{X \times \left\{0\right\}}}$ for ${t \simeq 0}$. To get this, we can consider the vector bundle ${F}$ on ${X \times [0, 1]}$ given by ${F= p^* i_0^* E}$ where ${p}$ is the projection ${X \times [0, 1] \rightarrow X}$ and ${i_0: X \hookrightarrow X \times [0, 1]}$ is the inclusion. In other words, ${F}$ is rigged such that ${F|_{X \times \left\{t\right\}} \simeq E|_{X \times \left\{0\right\}}}$ for all ${t}$. Now ${F}$ and ${E}$ are isomorphic when restricted to ${X \times \left\{0\right\}}$. So there is an equivariant section of ${\hom(F, E)|_{X \times \left\{0\right\}}}$ which is an isomorphism on each fiber. The claim is that we can extend it to an equivariant section in some neighborhood of ${X \times \left\{0\right\}}$ which will have to be an isomorphism in some neighborhood of the form ${X \times [0, \epsilon)}$ for ${\epsilon > 0}$; this is what we want. But, well, using standard techniques, we can extend it to a section in some ${G}$-invariant neighborhood (e.g. something of the form ${X \times [0, \epsilon')}$ for ${\epsilon' > 0}$), and then by averaging we can make the section equivariant. $\Box$ 3. K-theory Anyway, with these preliminaries established, we can now say that we have defined a functor ${K_G(\cdot)}$ from compact ${G}$-spaces to abelian groups, which descends to the homotopy category of equivariant spaces. It is easy to see from the definition as a Grothendieck group that, as in the nonequivariant case, ${K_G(X)}$ is actually a commutative ring. We can interpret some of the earlier examples of ${G}$-vector bundles in terms of K-theory. For instance, we have seen that there is an equivalence of categories between equivariant vector bundles on ${G/H}$ and ${H}$-representations; thus, ${K_G(G/H)}$ is the representation ring ${R(H)}$.
When ${H = G}$, then this is saying that ${K_G(\ast)}$ is the representation ring ${R(G)}$; of course, an equivariant vector bundle on ${\ast}$ is just a ${G}$-representation. In general, there are many interesting relations between equivariant K-theory and ordinary K-theory. To start with, as there is a forgetful functor from equivariant vector bundles to vector bundles, there is a natural ring-homomorphism $\displaystyle K_G(X) \rightarrow K(X)$ and more generally, whenever ${H \rightarrow G}$ is a morphism of compact Lie groups, there is a map $\displaystyle K_G(X) \rightarrow K_H(X).$ More interesting is the relation between the equivariant K-theory of a space and the K-theory of the quotient. Namely, let ${X}$ be a ${G}$-space, and consider ${X/G}$ as an ordinary space. There is a map $\displaystyle X \rightarrow X/G$ which enables one to pull back a vector bundle on ${X/G}$ to an ordinary vector bundle on ${X}$. In fact, the functor factors through the category of equivariant bundles on ${X}$; that is, there is a natural map $\displaystyle K(X/G) \rightarrow K_G(X)$ because the pull-back of a bundle on ${X/G}$ to ${X}$ automatically acquires an equivariant structure by general nonsense. This map is in general very far from being an isomorphism, for instance if ${X = \ast}$! However, we have: Proposition 6 The map ${K(X/G) \rightarrow K_G(X)}$ is an isomorphism if ${G}$ acts freely on ${X}$. Proof: In fact, we need to define an inverse map sending equivariant vector bundles ${P \rightarrow X}$ to vector bundles over ${X/G}$. Here we can send ${P \rightarrow X}$ to ${P/G \rightarrow X/G}$. Since all the identifications made in ${P/G}$ are made between distinct fibers (by freeness of the action), this defines a vector bundle on ${X/G}$, and this is the inverse to the above construction. I'm skipping some details.
$\Box$ In general, the above proposition suggests that the failure of ${K(X/G) \rightarrow K_G(X)}$ to be an isomorphism might be rectified if instead of taking quotients ${X/G}$, one took "homotopy quotients" ${(X \times EG)/G}$ where ${EG }$ is the total space for the universal principal ${G}$-bundle. This is in fact (essentially) the case, and is the content of the Atiyah-Segal completion theorem. One has to be careful, because taking homotopy quotients does not preserve compactness. Anyway, so far I've only covered the material in the first couple of pages; there's much more to say on this topic in the future.
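As a concrete illustration of Proposition 6 (a standard example I am adding, not one spelled out in the post): the circle group acts freely on odd-dimensional spheres by scalar multiplication, with quotient complex projective space, so the proposition identifies the equivariant K-theory of the sphere with the ordinary K-theory of projective space.

```latex
% S^1 acts freely on S^{2n+1} \subset \mathbb{C}^{n+1} by scalar
% multiplication, and S^{2n+1}/S^1 = \mathbb{CP}^n, so Proposition 6 gives
K_{S^1}\bigl(S^{2n+1}\bigr) \;\cong\; K\bigl(\mathbb{CP}^n\bigr).
```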
http://www2.surrey.ac.uk/maths/people/turner_matthew/index.htm
# Dr Matthew Turner ## Lecturer Qualifications: MMath (UEA), PhD (UEA) Email: Phone: Work: 01483 68 6183 Room no: 26A AA 04 ## Biography I obtained an MMath degree at the UEA in 2002 before completing my PhD in Fluid Dynamics in 2006, also at the UEA, under the supervision of Dr Paul Hammerton. After this, I spent 3 years at the University of Exeter working with Prof. Andrew Gilbert, before moving to the Sir Harry Ricardo Laboratories at the University of Brighton, where I worked with Prof. Sergei Sazhin, Prof. Jonathan Healey (Keele), Dr Renzo Piazzesi (ANSYS UK Ltd) and Dr Cyril Crua. I started at Surrey in May 2011 and I'm currently enjoying fruitful research collaborations with Prof. Tom Bridges and Dr Hamid Alemi Ardakani on sloshing and dynamic coupling of fluid systems, and Dr Gianne Derks and Dr Ruan Elliott on modelling the uptake of iron in the human body. ## Research Interests My research interests lie in the field of fluid dynamics, in particular: • Boundary layer receptivity • Vortex dynamics • Jet stability and breakup • General flow stability • Sloshing fluids and dynamical coupling For more information on my research interests, see the research page of my personal web site. ## Publications For more information on my publications, including draft copies of papers, see the publications page of my personal web page. ### Journal articles • . (2013) 'Dynamic coupling in Cooker's sloshing experiment with baffles.'. American Institute of Physics Phys Fluids, 25 Article number 112102 #### Abstract This paper investigates the dynamic coupling between fluid sloshing and the motion of the vessel containing the fluid, for the case when the vessel is partitioned using non-porous baffles. The vessel is modelled using Cooker's sloshing configuration [M. J. Cooker, “Water waves in a suspended container,” Wave Motion20, 385–395 (1994)]. 
Cooker's configuration is extended to include n − 1 non-porous baffles which divide the vessel into n separate fluid compartments each with a characteristic length scale. The problem is analysed for arbitrary fill depth in each compartment, and it is found that a multitude of resonance situations can occur in the system, from 1 : 1 resonances to (n + 1)-fold 1 : 1 : ⋯ : 1 resonances, as well as ℓ : m : ⋯ : n for natural numbers ℓ, m, n, depending upon the system parameter values. The conventional wisdom is that the principal role of baffles is to damp the fluid motion. Our results show that in fact, without special consideration, the baffles can lead to enhancement of the fluid motion through resonance. • . (2013) 'Nonlinear energy transfer between fluid sloshing and vessel motion'. Journal of Fluid Mechanics, 719, pp. 606-636. #### Abstract This paper examines the dynamic coupling between a sloshing fluid and the motion of the vessel containing the fluid. A mechanism is identified which leads to an energy exchange between the vessel dynamics and fluid motion. It is based on a 1:1 resonance in the linearized equations, but nonlinearity is essential for the energy transfer. For definiteness, the theory is developed for Cooker's pendulous sloshing experiment. The vessel has a rectangular cross section, is partially filled with a fluid, and is suspended by two cables. A nonlinear normal form is derived close to an internal 1:1 resonance, with the energy transfer manifested by a heteroclinic connection which connects the purely symmetric sloshing modes to the purely anti-symmetric sloshing modes. Parameter values where this pure energy transfer occurs are identified. In practice, this energy transfer can lead to sloshing-induced destabilization of fluid-carrying vessels. • . (2012) 'A breakup model for transient Diesel fuel sprays'. Fuel, 97, pp. 288-305.
#### Abstract

In this paper a breakup model for analysing the evolution of transient fuel sprays characterised by a coherent liquid core emerging from the injection nozzle, throughout the injection process, is proposed. The coherent liquid core is modelled as a liquid jet and a breakup model is formulated. The spray breakup is described using a composite model that separately addresses the disintegration of the liquid core into droplets and their further aerodynamic breakup. The jet breakup model uses the results of hydrodynamic stability theory to define the breakup length of the jet, and downstream of this point, the spray breakup process is modelled for droplets only. The composite breakup model is incorporated into the KIVA II Computational Fluid Dynamics (CFD) code and its results are compared with existing breakup models, including the classic WAVE model and a previously developed composite WAVE model (modified WAVE model) and in-house experimental observations of transient Diesel fuel sprays. The hydrodynamic stability results used in both the jet breakup model and the WAVE droplet breakup model are also investigated. A new velocity profile is considered for these models which consists of a jet with a linear shear layer in the gas phase surrounding the liquid core to model the effect of a viscous gas on the breakup process. This velocity profile changes the driving instability mechanism of the jet from a surface tension driven instability for the currently used plug flow jet with no shear layers, to an instability driven by the thickness of the shear layer. In particular, it is shown that appreciation of the shear layer instability mechanism in the composite model allows larger droplets to be predicted at jet breakup, and gives droplet sizes which are more consistent with the experimental observations.
The inclusion of the shear layer into the jet velocity profile is supported by previous experimental studies, and further extends the inviscid flow theory used in the formulation of the classic WAVE breakup model. © 2012 Elsevier Ltd. All rights reserved.

• . (2012) 'Tollmien-Schlichting wave amplitudes on a semi-infinite flat plate and a parabolic body: comparison of a parabolized stability equation method and direct numerical simulations'. Oxford University Press Quarterly Journal of Mechanics & Applied Maths, 65 (2), pp. 183-210.

#### Abstract

In this paper, the interaction of free-stream acoustic waves with the leading edge of an aerodynamic body is investigated and two different methods for analysing this interaction are considered. Results are compared for a method which incorporates Orr–Sommerfeld calculations using the parabolized stability equation to those of direct numerical simulations. By comparing the streamwise amplitude of the Tollmien–Schlichting wave, it is found that non-modal components of the boundary layer response to an acoustic wave can persist some distance downstream of the lower branch. The effect of nose curvature on the persisting non-modal eigenmodes is also considered, with a larger nose radius allowing the non-modal eigenmodes to persist farther downstream.

• . (2012) 'Wave packet analysis and break-up length calculations for accelerating planar liquid jets.'. Institute of Physics Fluid Dyn Res, 44 (1) Article number 015503

• . (2012) 'Resonance in a model for Cooker's sloshing experiment'. Elsevier European Journal of Mechanics B - Fluids, 36 (November-December), pp. 25-38.

#### Abstract

Cooker's sloshing experiment is a prototype for studying the dynamic coupling between fluid sloshing and vessel motion. It involves a container, partially filled with fluid, suspended by two cables and constrained to remain horizontal while undergoing a pendulum-like motion.
In this paper the fully-nonlinear equations are taken as a starting point, including a new derivation of the coupled equation for vessel motion, which is a forced nonlinear pendulum equation. The equations are then linearized and the natural frequencies studied. The coupling leads to a highly nonlinear transcendental characteristic equation for the frequencies. Two derivations of the characteristic equation are given, one based on a cosine expansion and the other based on a class of vertical eigenfunctions. These two characteristic equations are compared with previous results in the literature. Although the two derivations lead to dramatically different forms for the characteristic equation, we prove that they are equivalent. The most important observation is the discovery of an internal $1:1$ resonance in the fully two-dimensional finite depth model, where symmetric fluid modes are coupled to the vessel motion. Numerical evaluation of the resonant and nonresonant modes is presented. The implications of the resonance for the fluid dynamics, and for the nonlinear coupled dynamics near the resonance, are also briefly discussed.

• . (2011) 'A study of mixing in coherent vortices using braiding factors'. Institute of Physics Fluid Dynamics Research, 43 (3) Article number 035501

#### Abstract

This paper studies the use of braiding fluid particles to quantify the amount of mixing within a fluid flow. We analyze the pros and cons of braid methods by considering the motion of three or more fluid particles in a coherent vortex structure. The relative motions of the particles, as seen in a space–time diagram, produce a braid pattern, which is correlated with mixing and measured by the braiding factor. The flow we consider is a Gaussian vortex within a rotating strain field that generates cat's eyes in the vortex. We also consider a modified version of this strain field that contains a resonance frequency effect that produces multiple sets of cat's eyes at different radii.
As the thickness of the cat's eyes increases, they interact with one another and produce complex Lagrangian motion in the flow that increases the braiding of particles, hence implying more mixing within the vortex. It is found that calculating the braiding factor using only three fluid particles gives useful information about the flow, but only if all three particles lie in the same region of the flow, i.e. this gives good local information. We find that we only require one of the three particles to trace a chaotic path to give an exponentially growing braiding factor, i.e. a non-zero 'braiding exponent'. A modified braiding exponent is also introduced which removes the spurious effects caused by the rotation of the fluid. This analysis is extended to a more global approach by using multiple fluid particles that span larger regions of the fluid. Using these global results, we compare the braiding within a viscously spreading Gaussian vortex in the above strain fields, where the flow is determined both kinematically and dynamically. We show that the dynamic feedback of the strain field onto the flow field reduces the overall amount of braiding of the fluid particles.

• . (2011) 'Stability analysis and breakup length calculations for steady planar liquid jets'. CAMBRIDGE UNIV PRESS JOURNAL OF FLUID MECHANICS, 668, pp. 384-411.

• . (2009) 'The influence of periodic islands in the flow on a scalar tracer in the presence of a steady source'. American Institute of Physics Physics of Fluids, 21 (6)

#### Abstract

In this paper we examine the influence of periodic islands within a time periodic chaotic flow on the evolution of a scalar tracer. The passive scalar tracer is injected into the flow field by means of a steady source term. We examine the distribution of the tracer once a periodic state is reached, in which the rate of injected scalar balances advection and diffusion with the molecular diffusivity $\kappa$.
We study the two-dimensional velocity field $u(x, y, t) = 2\cos^2(\omega t)\,(0, \sin x) + 2\sin^2(\omega t)\,(\sin y, 0)$. As $\omega$ is reduced from an $O(1)$ value the flow alternates through a sequence of states which are either globally chaotic, or contain islands embedded in a chaotic sea. The evolution of the scalar is examined numerically using a semi-Lagrangian advection scheme. By time-averaging diagnostics measured from the scalar field we find that the time-averaged lengths of the scalar contours in the chaotic region grow like $\kappa^{-1/2}$ for small $\kappa$, for all values of $\omega$, while the behavior of the time-averaged maximum scalar value, $\overline{C}_{\max}$, for small $\kappa$ depends strongly on $\omega$. In the presence of islands $\overline{C}_{\max} \sim \kappa^{-\alpha}$ for some $\alpha$ between 0 and 1 and with $\kappa$ small, and we demonstrate that there is a correlation between $\alpha$ and the area of the periodic islands, at least for large $\omega$. The limit of small $\omega$ is studied by considering a flow field that switches from $u=(0, 2\sin x)$ to $u=(2\sin y, 0)$ at periodic intervals. The small $\kappa$ limit for this flow is examined using the method of matched asymptotic expansions. Finally the role of islands in the flow is investigated by considering the time-averaged effective diffusion of the scalar field. This diagnostic can distinguish between regions where the scalar is well mixed and regions where the scalar builds up. © 2009 American Institute of Physics. [DOI: 10.1063/1.3159615]

• . (2009) 'Diffusion and the formation of vorticity staircases in randomly strained two-dimensional vortices'. CAMBRIDGE UNIV PRESS JOURNAL OF FLUID MECHANICS, 638, pp. 49-72.

• . (2009) 'Spreading of two-dimensional axisymmetric vortices exposed to a rotating strain field'. CAMBRIDGE UNIVERSITY PRESS Journal of Fluid Mechanics, 630, pp. 155-177.

• .
(2009) 'Analysis of the unstable Tollmien-Schlichting mode on bodies with a rounded leading edge using the parabolized stability equation'. CAMBRIDGE UNIV PRESS JOURNAL OF FLUID MECHANICS, 623, pp. 167-185.

• . (2008) 'Effective diffusion of scalar fields in a chaotic flow'. AMER INST PHYSICS PHYSICS OF FLUIDS, 20 (10) Article number ARTN 107103

• . (2008) 'Thresholds for the formation of satellites in two-dimensional vortices'. CAMBRIDGE UNIV PRESS JOURNAL OF FLUID MECHANICS, 614, pp. 381-405.

• . (2008) 'Neutral modes of a two-dimensional vortex and their link to persistent cat's eyes'. AMER INST PHYSICS PHYSICS OF FLUIDS, 20 (2) Article number ARTN 027101

• . (2007) 'Linear and nonlinear decay of cat's eyes in two-dimensional vortices, and the link to Landau poles'. Cambridge University Press Journal of Fluid Mechanics, 593, pp. 255-279.

#### Abstract

This paper considers the evolution of smooth, two-dimensional vortices subject to a rotating external strain field, which generates regions of recirculating, cat's eye stream line topology within a vortex. When the external strain field is smoothly switched off, the cat's eyes may persist, or they may disappear as the vortex relaxes back to axisymmetry. A numerical study obtains criteria for the persistence of cat's eyes as a function of the strength and time scale of the imposed strain field, for a Gaussian vortex profile. In the limit of a weak external strain field and high Reynolds number, the disturbance decays exponentially, with a rate that is linked to a Landau pole of the linear inviscid problem. For stronger strain fields, but not strong enough to give persistent cat's eyes, the exponential decay of the disturbance varies: as time increases the decay slows down, because of the nonlinear feedback on the mean profile of the vortex. This is confirmed by determining the decay rate given by the Landau pole for these modified profiles.
For strain fields strong enough to generate persistent cat's eyes, their location and rotation rate are determined for a range of angular velocities of the external strain field, and are again linked to Landau poles of the mean profiles, modified through nonlinear effects.

• . (2007) 'Far downstream analysis for the Blasius boundary-layer stability problem'. QJMAM, 60 (3), pp. 255-274.

#### Abstract

In this paper, we examine the large Reynolds number ($Re$) asymptotic structure of the wave number in the Orr–Sommerfeld region for the Blasius boundary layer on a semi-infinite flat plate given by Goldstein (1983, J. Fluid Mech., 127, 59–81). We show that the inclusion of the term which contains the leading-order non-parallel effects, at $O(Re^{-1/2})$, leads to a non-uniform expansion. By considering the far downstream form of each term in the asymptotic expansion, we derive a length scale at which the non-uniformity appears, and compare this position with the position seen in plots of the wave number.

• . (2006) 'Asymptotic receptivity analysis and the Parabolized Stability Equation : a combined approach to boundary layer transition'. J. Fluid Mech, 562, pp. 355-382.

#### Abstract

We consider the interaction of free-stream disturbances with the leading edge of a body and its effect on the transition point. We present a method which combines an asymptotic receptivity approach, and a numerical method which marches through the Orr–Sommerfeld region. The asymptotic receptivity analysis produces a three-deck eigensolution which in its far downstream limiting form produces an upstream boundary condition for our numerical parabolized stability equation (PSE). We discuss the advantages of this method compared to existing numerical and asymptotic analysis and present results which justify this method for the case of a semi-infinite flat plate, where asymptotic results exist in the Orr–Sommerfeld region.
We also discuss the limitations of the PSE and comment on the validity of the upstream boundary conditions. Good agreement is found between the present results and the numerical results of Haddad & Corke (1998).

## Teaching

Autumn: MAT3041 Fluid Mechanics
Spring: MAT2050 Inviscid Fluid Dynamics

## Departmental Duties

Marketing and Website Officer
Applicant Day and Open Day Coordinator
Athena Swan Committee Member

Page Owner: mt0019
Page Created: Tuesday 12 April 2011 11:28:40 by kg0013
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-5-section-5-1-verifying-trigonometric-identities-exercise-set-page-660/96
## Precalculus (6th Edition) Blitzer

The original identity is $1+{{\tan }^{2}}x=\frac{1}{{{\cos }^{2}}x}$.

Let us consider one of the Pythagorean identities:

$1+{{\tan }^{2}}x={{\sec }^{2}}x$

Now, using one of the reciprocal identities, which is $\sec x=\frac{1}{\cos x}$, we get:

\begin{align} & 1+{{\tan }^{2}}x={{\sec }^{2}}x \\ & 1+{{\tan }^{2}}x=\frac{1}{{{\cos }^{2}}x} \\ \end{align}
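As a quick numerical sanity check (added here; not part of the original solution), the identity can be verified at a few sample angles in Python:

```python
import math

# check 1 + tan^2(x) = 1/cos^2(x) at a few sample angles
for x in (0.3, 0.7, 1.2):
    lhs = 1 + math.tan(x) ** 2
    rhs = 1 / math.cos(x) ** 2
    assert abs(lhs - rhs) < 1e-9
```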
https://mikebader.net/blog/tags/macros/
# nesting stata macros, or hacking a hash map

Programming in Stata is relatively straightforward, and this is partly because the programming syntax is both powerful and simple. There are, however, a few minor annoyances in Stata's language, including using the backtick and apostrophe to indicate local macros (i.e., `localname'). Among these shortcomings, I would argue that the lack of anything like a list in Stata's language is one of the largest. In most languages, you can store a list of items and refer to the item in the list by some sort of index. This is particularly helpful for iterating over the same step multiple times. Lists generally come in two flavors: lists to which you ...

# matching substrings entirely within stata

At Orgtheory, Fabio asked about how to identify substrings within text fields in Stata. Although this is a seemingly simple proposal, there is one big problem, as Gabriel Rossman points out: Stata string fields can only hold 244 characters of text. As Fabio desires to use this field to analyze scientific abstracts, 244 characters is obviously insufficient. Gabriel Rossman has posted a solution he has called grepmerge that uses the Linux-based program grep to search for strings in files. This is a great solution, but it comes with one large caveat: it cannot be used in a native Windows environment. This is because the `grep` command ...
http://clay6.com/qa/19776/if-overrightarrow-a-2p-hat-i-hat-j-is-rotated-about-an-angle-of-theta-in-an
# If $\overrightarrow a=2p\hat i+\hat j$ is rotated through an angle of $\theta$ in the anticlockwise direction and becomes $(p+1)\hat i+\hat j$, find the value of $p$.

Since the vector $\overrightarrow a$ is only rotated, its magnitude is unchanged

$\Rightarrow\:|2p\hat i+\hat j|=|(p+1)\hat i+\hat j|$

$\Rightarrow\:4p^2+1=p^2+2p+2$

$\Rightarrow\:3p^2-2p-1=0$

$\Rightarrow\:p=1$ or $p=-\large\frac{1}{3}$

But if $p=1$ the two vectors are the same, i.e., no rotation has taken place.

$\therefore$ $p=-\large\frac{1}{3}$
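The quadratic roots and the magnitude condition can be double-checked numerically (an illustrative check, not part of the original solution):

```python
import math

def mag(v):
    # Euclidean magnitude of a 2-D vector
    return math.hypot(v[0], v[1])

# the two roots of 3p^2 - 2p - 1 = 0 both preserve the magnitude
for p in (1.0, -1.0 / 3.0):
    assert abs(3 * p ** 2 - 2 * p - 1) < 1e-12
    a = (2 * p, 1.0)      # original vector 2p i + j
    b = (p + 1, 1.0)      # rotated vector (p+1) i + j
    assert abs(mag(a) - mag(b)) < 1e-12

# p = 1 gives two identical vectors (no rotation), so p = -1/3 is the answer
assert (2 * 1.0, 1.0) == (1.0 + 1, 1.0)
```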
https://mvtrinh.wordpress.com/
Chris paid \$81 for a dress that had been discounted $25\%$ and then marked down an additional $10\%$. Taking both discounts into consideration, determine the original price of the dress.

Source: NCTM Mathematics Teacher, March 2008

Solution

Let $x$ represent the original price. We apply the first discount, then the additional markdown off the discounted price.

$81=(x-.25x)-(x-.25x)(.10)$
$=.75x-.075x$
$=.675x$
$x=81/.675=120$

Answer: \$120

Alternative solution 1

This solution does not use any subtraction.

$81=.9(x \times .75)$
$90=.75x$
$x=90/.75=120$

Andy is taking a multiple-choice test consisting of $10$ questions. All the items have four answer choices but only one correct answer. Unfortunately, Andy did not study for the test and plans on guessing the answer for each item. What is the probability that Andy will guess the answer for every item correctly?

Source: NCTM Mathematics Teacher, March 2008

Solution

Suppose the test has $1$ question. Number of choices = $4$

$A\;B\;C\;D$

Probability of guessing every item correctly = $1/4$

$2$ questions: Number of choices = $4^2$

$AA\;AB\;AC\;AD$
$BA\;BB\;BC\;BD$
$CA\;CB\;CC\;CD$
$DA\;DB\;DC\;DD$

Probability = $1/4^2$

$3$ questions: Number of choices = $4^3$

$AAA\;AAB\;AAC\;AAD$
$ABA\;ABB\;ABC\;ABD$
etc.

Probability = $1/4^3$

$10$ questions: Number of choices = $4^{10}$

$AAAAAAAAAA\;AAAAAAAAAB\;AAAAAAAAAC\;AAAAAAAAAD$
$AAAAAAAABA\;AAAAAAAABB\;AAAAAAAABC\;AAAAAAAABD$
etc.

Probability = $1/4^{10}=1/1048576\approx 0.0001\%$ or about $1$ in a million

Answer: $\approx 0.0001\%$

Special Operation

If $m\,\phi\,n=m^2-6mn+3n^3$, what is $-2\,\phi\,3$?

Source: NCTM Mathematics Teacher, March 2008

Solution

$-2\,\phi\,3=(-2)^2-6(-2)(3)+3(3)^3=4+36+81=121$

Answer: $121$

Identify Two Integers

The sum of two integers is $-5$ and their product is $-24$. Identify both integers.
Source: NCTM Mathematics Teacher, March 2008

Solution

$24$ is small enough that we can exhaustively enumerate its factors

$1\times 24$
$2\times 12$
$3\times 8$
$4\times 6$

The possible negative sums of the factors are

$1+(-24)=-23$
$2+(-12)=-11$
$3+(-8)=-5$
$4+(-6)=-2$

The two integers are $3$ and $-8$.

Answer: $3$ and $-8$

Alternative solution 1

Let $a$ and $b$ represent the two integers. Given $a+b=-5$ and $ab=-24$, we have

$(a+b)^2=a^2+2ab+b^2$
$(-5)^2=a^2+2(-24)+b^2$
$25=a^2-48+b^2$
$a^2+b^2=73$

Consider the consecutive squares less than $73: 1,4,9,16,25,36,49,64$. The possible $a^2+b^2$ odd values are

$1+64=65$
$9+64=73$
$25+36=61$
$49+64=113$
$49+36=85$
$49+16=65$

Only $a^2+b^2=9+64=73$. Hence $a=\pm 3$ and $b=\pm 8$. The two integers are $3$ and $-8$.

Alternative solution 2

If two integers $a$ and $b$ are the solutions of a quadratic equation, we can write $x^2-(a+b)x+ab=0$. Substitute the values of $a+b$ and $ab$

$x^2+5x-24=0$

Solve the quadratic equation by factoring

$(x-3)(x+8)=0$
$x=3$ and $x=-8$

Alternative solution 3

Let $x$ and $y$ represent the two integers. We have

$x+y=-5\qquad (1)$
$xy=-24\qquad\:\:\,\, (2)$

Solving for $y$ in Eq. $(1)$

$y=-x-5$

Substitute the value of $y$ into Eq. $(2)$

$x(-x-5)=-24$
$x^2+5x-24=0$

Distance Graph

For the graph shown below, create a graph showing distance traveled versus time.

Source: NCTM Mathematics Teacher, March 2008

Solution

Distance = $\mathrm{speed}\times\mathrm{time}$. When speed is constant, distance increases linearly; the graph representing distance is a line. When speed is not constant and increases as a linear function of time, distance increases quadratically; the distance graph has the curvature of a parabola.
A possible graph is shown below.

Story of Graph

Write a story to accompany the graph below.

Source: NCTM Mathematics Teacher, March 2008

Solution

Minute $0$ to $3$: speed increases rapidly
Minute $3$ to $15$: speed remains constant
Minute $15$ to $18$: speed increases again, though at a lesser rate than before
Minute $18$ to $24$: speed remains constant

Alternative solution

From a complete stop, you begin driving your car at a steadily increasing rate. After approximately $2$ minutes, you reach your desired speed and set the cruise control. For the next $15$ minutes, you travel at a steady speed. At this point you notice that the speed limit has changed, so you steadily increase the speed of your car for the next $2$ minutes. Then you set the cruise control at the desired speed.

Right Cylinder

What is the volume and surface area of the right cylinder capped by two hemispheres pictured below?

Source: NCTM Mathematics Teacher, March 2008

Solution

Total volume = volume of cylinder + volume of two hemispheres

$=\pi r^2 h+(4/3)\pi r^3$
$=\pi 6^2(12)+(4/3)\pi 6^3$
$=\pi (432+288)$
$=720\pi$
$=2262\:\mathrm{ft}^3$

Total surface area = surface area of cylinder + surface area of two hemispheres

$=2\pi r h+4\pi r^2$
$=2\pi(6)(12)+4\pi 6^2$
$=\pi(144+144)$
$=288\pi$
$=905\:\mathrm{ft}^2$

Answer: volume = $2262\:\mathrm{ft}^3$; surface area = $905\:\mathrm{ft}^2$

Apples and Oranges

Juanita is reviewing her grocery bills. Three oranges and four apples cost \$4.33. Five oranges and three apples cost \$5.42. What was the cost of each orange and each apple?

Source: NCTM Mathematics Teacher, March 2008

Solution

Let $r$ represent the cost of an orange and $a$ represent the cost of an apple. We have the following equations

$3r+4a=4.33\qquad (1)$
$5r+3a=5.42\qquad (2)$

Multiply Eq. $(1)$ by $-3$ and Eq. $(2)$ by $4$ and add the equations

$-9r-12a=-12.99$
$20r+12a=21.68$
$11r=8.69$
$r=.79$

Substitute the value of $r$ into Eq.
$(1)$

$3(.79)+4a=4.33$
$2.37+4a=4.33$
$4a=1.96$
$a=.49$

Answer: oranges cost $79$ cents each; apples $49$ cents each

Photograph Five Students

A Math Olympiad team is posing for its annual media photograph. How many ways are there to arrange the five members of the team in one line?

Source: NCTM Mathematics Teacher, March 2008

Solution

Choose one for the first position – $5$ ways
Choose one for the second – $4$ ways
Choose one for the third – $3$ ways
Choose one for the fourth – $2$ ways
Choose one for the fifth – $1$ way

$5\times 4\times 3\times 2\times 1=120$ ways to arrange the five members of the team in one line.

Answer: $120$

Mean, Median, and Mode

Using the frequency table below, determine the mean, median, and modal income.

Income        Frequency
\$1,400,000   1
\$520,000     3
\$125,000     6
\$85,000      8
\$45,000      12

Source: NCTM Mathematics Teacher, March 2008

Solution

mean = $(1400000+(3)520000+(6)125000+(8)85000+(12)45000)/(1+3+6+8+12)=164333.33$

We order the $30$ values from smallest to greatest. Since the count of numbers is even, the median equals the average of the fifteenth and sixteenth values.

median = $(85000+85000)/2=85000$

mode = $45000$ because this value occurs the most, twelve times.

Answer: mean = \$164,333.33; median = \$85,000; mode = \$45,000
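The three statistics can be reproduced with Python's statistics module (a verification added here, not part of the original solution):

```python
import statistics

# expand the frequency table into the 30 individual incomes
table = [(1_400_000, 1), (520_000, 3), (125_000, 6), (85_000, 8), (45_000, 12)]
incomes = [income for income, freq in table for _ in range(freq)]

mean = statistics.mean(incomes)      # about 164333.33
median = statistics.median(incomes)  # 85000
mode = statistics.mode(incomes)      # 45000
print(round(mean, 2), median, mode)
```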
https://socratic.org/questions/the-length-of-a-postage-stamp-is-4-1-4-millimeters-longer-than-its-width-the-per
# The length of a postage stamp is 4 1/4 millimeters longer than its width. The perimeter of the stamp is 124 1/2 millimeters. What is the width of the postage stamp? What is the length of the postage stamp?

The length and width of the postage stamp are $33 \frac{1}{4}$ mm and $29$ mm respectively.

Let the width of the postage stamp be $x$ mm. Then the length of the postage stamp is $\left(x + 4 \frac{1}{4}\right)$ mm.

The given perimeter is $P = 124 \frac{1}{2}$. We know the perimeter of a rectangle is $P = 2 \left(w + l\right)$, where $w$ is the width and $l$ is the length. So

$2 \left(x + x + 4 \frac{1}{4}\right) = 124 \frac{1}{2}$

$4 x + 8 \frac{1}{2} = 124 \frac{1}{2}$

$4 x = 124 \frac{1}{2} - 8 \frac{1}{2}$

$4 x = 116$

$x = 29$

$\therefore x + 4 \frac{1}{4} = 33 \frac{1}{4}$

The length and width of the postage stamp are $33 \frac{1}{4}$ mm and $29$ mm respectively.
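The arithmetic checks out in exact rational arithmetic (a verification sketch added here, not part of the original answer):

```python
from fractions import Fraction

extra = Fraction(17, 4)       # length exceeds width by 4 1/4 mm
perimeter = Fraction(249, 2)  # perimeter is 124 1/2 mm

# P = 2(w + l) = 2(2w + 17/4)  =>  w = (P - 17/2) / 4
width = (perimeter - 2 * extra) / 4
length = width + extra

assert width == 29                        # 29 mm
assert length == Fraction(133, 4)         # 33 1/4 mm
assert 2 * (width + length) == perimeter  # perimeter recovered
```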
https://cs.stackexchange.com/questions/30555/what-is-the-name-of-this-function-of-a-tree
# What is the name of this function of a tree?

I've written a recursive function of a tree, and I would like to know what it's called! It's not quite the same as the height or the width of a tree, but it seems kind of like a width.

Assuming the tree $T$ is binary, we can define the function $f(T)$ as follows. If $T$ has only a single node, let $f(T) = 1$. Otherwise, let $T_0$ and $T_1$ be the two trees rooted at the child nodes of $T$'s root. If $f(T_0) = f(T_1)$, then let $f(T) = f(T_0) + 1$; otherwise, let $f(T) = \max(f(T_0), f(T_1))$.

The function $f$ is generalized as follows to trees that are not necessarily binary. As before, let $f(T) = 1$ when $T$ has no edges. Otherwise, let $T_0, \dots, T_k$ be the $k+1$ subtrees rooted at the child nodes of $T$'s root, sorted by descending values of $f$. Then let $f(T) = \max_i(f(T_i) + i)$.

This is basically the Strahler number, which is great because now I can call my function ComputeStrahlerNumber instead of keeping the profane function name I was using as a placeholder.
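The generalized recursion in the question translates almost directly into Python (a sketch; here a tree is represented as a nested list of child subtrees, so a leaf is the empty list):

```python
def strahler_like(tree):
    """f(T) = 1 for a single node, otherwise max_i(f(T_i) + i)
    over the child subtrees sorted by descending f (0-based i)."""
    if not tree:  # no children: single node
        return 1
    vals = sorted((strahler_like(c) for c in tree), reverse=True)
    return max(v + i for i, v in enumerate(vals))

leaf = []
assert strahler_like(leaf) == 1
assert strahler_like([leaf, leaf]) == 2           # equal children: f + 1
assert strahler_like([[leaf, leaf], leaf]) == 2   # unequal children: the max wins
assert strahler_like([[leaf, leaf], [leaf, leaf]]) == 3
```

For binary trees the sorted form reduces to the two-case rule stated in the question, which is why the values agree with the Strahler number.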
2019-07-20 11:50:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677135944366455, "perplexity": 272.07287397076396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00408.warc.gz"}
https://gowrishankar.info/blog/higher-cognition-through-inductive-bias-out-of-distribution-and-biological-inspiration/
### Higher Cognition through Inductive Bias, Out-of-Distribution and Biological Inspiration Posted April 24, 2021 by Gowri Shankar ‐ 12 min read The fascinating thing about human (animal) intelligence is its ability to systematically generalize outside of the known distribution on which it was presumably trained. If intelligence can be explained with a few principles instead of a huge list of hypotheses and heuristics, then understanding intelligence and building intelligent machines will take an inspiring, evolutionary path. The inspiration for this post is: 1. An interview with Dr. Srinivasa Chakravarthy, head of the Computational Neuroscience Lab at IIT Madras, 2. A recent paper by Anirudh Goyal and Yoshua Bengio titled Inductive Biases for Deep Learning of Higher Level Cognition, and 3. The Probabilistic Deep Learning with TensorFlow 2 course by Kevin Webster of Imperial College, London. I would be proud if one finds this post to be a commentary on Goyal and Bengio's paper, inspired by Dr. Chakravarthy's approach towards understanding brain function. Image Credit: Analytics India Mag ### Objective 1. Prologue to Artificial General Intelligence 2. What is an inductive bias? 3. How does the human cognitive system use inductive biases? 4. What are the inductive biases used in deep learning? 5. What is Out-of-Distribution? 6. Simulation of human-like intelligence using a hypothesis space 7. Biologically inspired deep learning architectures 8. What is Attention? 9. Overview of Interpretable Multi-Head Attention 10. Introduction to Multi-Horizon Forecasting using Temporal Fusion Transformers ## Introduction The beauty of deep learning systems is their inherent ability to learn and exploit the inductive biases in the train/validation dataset to achieve a certain level of convergence, one that never yields 100% accuracy. In-distribution learning is the root cause of this poor performance, and it leads to a lack of generalization.
Even for a simple linear equation $y = mx + c$ with clearly set prior knowledge, the post-training accuracy is not at an appreciable level. Reference: Finding Unknown Differentiable Equation - Automatic Differentiation Using Gradient Tapes. The key difference between brain function and deep learning algorithms is the absence of a backward pass (backpropagation) in the human cognitive system. I often ponder: is evolution nothing but a function of backpropagation that has run for eons to bring us to wherever we are today in the form of humans (animals)? I do not mind being called crazy or stupid for asking that question; I will be grateful if I find an answer during my time here, as part of my long journey. Further, I believe the human system is nothing but a bunch of sensors and actuators with a highly optimized PID controller called the brain, handling out-of-distribution scenarios to achieve generalization through transfer learning. ### We Evolved Under Pressure While great mammoths and Neanderthals went extinct, cockroaches and humans thrived; facing similar challenges and threats reinforced the principle of survival of the fittest. A species thrived through its ability to interact with multiple agents and newer environments, learn through competition, and survive under constant pressure to achieve a higher level of flexibility, robustness and adaptability. Somewhere along the way (we do not know how, where or when), the human evolutionary process diverged from the rest of the species to achieve systematic generalization: knowledge is decomposed into smaller pieces that can be recomposed dynamically to reason, imagine and create at an explicit level, achieving higher cognition. Image Credit: Artificial Intelligence or the Evolution of Humanity ### Beyond Data - Multiple Hypotheses and Hypothesis Spaces Deep learning has brought remarkable progress but needs to be extended in qualitative and not just quantitative ways (larger datasets and more computing resources).
The paper further says that having larger and more diverse datasets is important but insufficient without good architectural inductive biases. - Goyal et al At present we are quite successful at achieving good accuracy metrics for narrow tasks within boundaries, under a context, through • Supervised learning, where the dataset is labeled, or • Reward-and-penalty-based architectures like reinforcement learning. This is quite contrary and tangential to the nature of the human learning system: past knowledge, the inductive priors of the human cognitive system, allows humans to quickly generalize on new tasks and reuse previously acquired knowledge. The dataset restricts the learner's hypothesis space; it may be large enough for a context-specific problem but quite small to ensure reliable generalization. Jonathan Baxter, in his paper titled A Model of Inductive Bias Learning, proposes a model and architectural scheme for bias learning. These models typically take the following form. A training dataset $z$ is drawn independently according to an underlying distribution $P$ on $\mathcal{X} \times \mathcal{Y}$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$: $$z = \{(x_1, y_1), \cdots, (x_m, y_m)\}$$ The learning agent is then supplied with a hypothesis space $\mathcal{H}$. Based on the information contained in $z$, the learning agent's goal is to select a hypothesis $h: \mathcal{X} \rightarrow \mathcal{Y}$ minimizing a certain loss. For example, this loss $er_P(h)$ could be the squared loss $$\large er_P(h):= \mathbf{E}_{(x, y)\sim P}(h(x) - y)^2$$ i.e. the empirical loss of $h$ on $z$ to be minimized is $$\hat{er}_z(h) = \frac{1}{m} \sum_{i=1}^m l(h(x_i), y_i) \tag{1. Empirical Loss}$$ In such models the learner's bias is represented by the choice of $\mathcal{H}$; if $\mathcal{H}$ does not contain a good solution to the problem, then, regardless of how much data the learner receives, it cannot learn.
- Jonathan Baxter The best way to model inductive bias learning is to learn an environment of related learning tasks by supplying a family of hypothesis spaces $\mathcal{H} \in \mathbf{H}$ to the learning agent. Bias learnt in such environments via sufficiently many training tasks is more likely to pave the way for learning novel tasks belonging to that environment. For any sequence of $n$ hypotheses $(h_1, \cdots, h_n)$ the loss function is $$l\big((h_1, \cdots, h_n), ((x_1, y_1), \cdots, (x_m, y_m))\big) = \frac{1}{n} \sum_{i=1}^n \hat{er}_z(h_i) \tag{2. Overall Loss}$$ Even the approach presented above is quite a statistical learning framework, with the twist of supplying more hypotheses to the learning agent. This is still not sufficient for generalization unless we bring in notions about the learning agent and causality: even if the application is just classifying cats and dogs, the green pastures, sky and other stationary objects in the image should be part of the learning policy. This approach leads us to think of the data not as a set of examples drawn independently from the same distribution, but as a reflection of the real world. ## General Purpose Learning It is practically impossible to model a complete general-purpose learning system, because any such system will implicitly or explicitly generalize better on certain distributions and fail on others. Is that why human beings are highly biased? Don't we all agree that the most sophisticated general-purpose system is the human system? The question for AI research aiming at human-level performance then is to identify inductive biases that are most relevant to the human perspective on the world around us.
- Goyal et al ### What are Inductive Biases? We now have clarity that inductive bias is all about the past knowledge fed to the learning agent; the following are a few concrete examples of inductive biases in deep learning: • Patterns of Features: distributed representation of the environment through various feature vectors • Group Equivariance: the process of convolution over the 2D space • Composition of Functions: deep architectures that perform various functions to extract information • Equivariance over Entities and Relations: graph neural networks • Equivariance over Time: recurrent networks • Equivariance over Permutations: soft attention These biases are encoded using various techniques, like explicit regularisation objectives, architectural constraints, parameter sharing, prior distributions in a Bayesian setup, convolution and pooling, data augmentation, etc. For example, in a transfer learning setup with a strong inductive bias: The first half of the training set already strongly contains what the learner should predict on the second half of the training set, which means that when we encounter these examples, their error is already low and training thus converges faster. - Goyal et al State-of-the-art deep learning architectures for object detection and natural language processing extract information from past experiences and tasks to improve their learning speed. A significant level of generalization is transferred to a specific sample set of a related task, giving the learner an idea of • what the tasks have in common, what is stable and stationary across the environments experienced, and • how they differ, or how changes occur from one to the next, in case we consider sequential decision-making scenarios. ### Out-of-Distribution Generalization It is evident that generalization can be achieved only when we draw training observations outside of specific distributions.
The paper frames OOD through sample complexity when facing new tasks or changing distributions: • 0-shot OOD and • k-shot OOD. Achieving this sample complexity is studied in linguistics, where the notion is called systematic generalization, i.e. the meaning of a novel composition of existing concepts (e.g. words) can be derived systematically from the meanings of the composed concepts. It is nothing but comprehending a particular concept through various other factors not directly related to the concept of interest; e.g., the auditory lobe augments the sound of a crow to classify the bird when it flies in the vicinity of a person. This level of generalization makes it possible to decipher new combinations that have zero probability under the training distribution: creativity, innovation, new ideas, or even science fiction. Empirical studies of such generalizations were performed by Bahdanau et al in the linguistics and NLP domain, with results nowhere near comparable to human cognition. ## Biological Inspiration - Attention In this section, we shall explore biological inspiration in deep learning models, focusing on attention, the equivariance-over-permutations process. Attention is a concept from the cognitive sciences; selective attention illustrates how we focus our attention on particular objects in the surroundings. This mechanism helps us to concentrate on things that are relevant and important and discard unwanted information. Image Credit: blog.floydhub.com The most famous applications of the attention mechanism are: • Image Captioning: based on the objects and their importance, the system generates a caption for the image • Neural Machine Translation: focus on the right few words to translate the remaining ones, for English-to-French style translation.
• Multi-Horizon Forecasting: similar to NMT; here the next few values are forecast based on the historical sequence. ### Attention for Multi-Horizon Forecasting of Temporal Data An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - Vaswani et al, Attention is All You Need Multi-horizon forecasting (MHF) often involves a complex mix of inputs, including static covariates, known future inputs and other exogenous time series that are only observed in the past, without any prior information on how they interact with the target. There are many DL architectures published for multi-horizon forecasting problems, but most of them are black-box models. In this section, we shall see the Interpretable Multi-Head Attention of the Temporal Fusion Transformer (TFT). TFT is a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses a recurrent neural network (RNN) for local processing and interpretable self-attention layers for long-term dependencies. It utilizes specialized components to select relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of scenarios. The full details of TFT are beyond the scope of this article; we shall see the Interpretable Multi-Head Attention in detail. TFT employs a self-attention mechanism to learn long-term relationships across different time steps, which we modify from multi-head attention in transformer-based architectures [17, 12] to enhance explainability.
The Q, K and V • The attention mechanism scales values $V \in \mathbb{R}^{N \times d_V}$ based on relationships between keys ($K$) and queries ($Q$) • $K \in \mathbb{R}^{N \times d_{attn}}$ is the key matrix • $Q \in \mathbb{R}^{N \times d_{attn}}$ is the query matrix $$Attention(Q, K, V) = A(Q, K)V \tag{3}$$ Image Credit: Attention is All You Need Where • $A(\cdot)$ is the normalization function; a common choice is scaled dot-product attention $$A(Q, K) = softmax\left(\frac{QK^T}{\sqrt{d_{attn}}}\right) \tag{4. Attention}$$ Multi-head attention employs different heads for different representation subspaces to increase learning capacity $$MultiHead(Q, K, V) = [H_1, \cdots, H_{m_H}]W_H \tag{5}$$ with $$H_h = Attention(QW^{(h)}_Q, KW^{(h)}_K, VW^{(h)}_V) \tag{6. Multi-Head Attention}$$ Where • $W_K^{(h)} \in \mathbb{R}^{d_{model} \times d_{attn}}$ are head-specific weights for keys • $W_Q^{(h)} \in \mathbb{R}^{d_{model} \times d_{attn}}$ are head-specific weights for queries • $W_V^{(h)} \in \mathbb{R}^{d_{model} \times d_{V}}$ are head-specific weights for values • $W_H \in \mathbb{R}^{(m_H \cdot d_V) \times d_{model}}$ linearly combines the outputs concatenated from all heads $H_h$ 1. Since different values are used in each head, attention weights alone are not indicative of a feature's importance. 2. TFT therefore modifies multi-head attention to share values across heads and to aggregate all heads additively: $$InterpretableMultiHead(Q, K, V) = \tilde{H}\tilde{W}_H \tag{7. Interpretable MH}$$ $$\tilde{H} = \tilde{A}(Q, K)VW_V \tag{8}$$ where the averaged attention weights are $$\tilde{A}(Q, K) = \frac{1}{m_H} \sum_{h=1}^{m_H} A(QW_Q^{(h)}, KW_K^{(h)}) \tag{9}$$ so that $$\tilde{H} = \frac{1}{m_H} \sum_{h=1}^{m_H} A(QW^{(h)}_Q, KW^{(h)}_K)\,VW_V \tag{10}$$ Where • $W_V \in \mathbb{R}^{d_{model} \times d_V}$ are value weights shared across all heads • $\tilde{W}_H \in \mathbb{R}^{d_{attn} \times d_{model}}$ is used for the final linear mapping 1. Through this, each head can learn different temporal patterns while attending to a common set of input features. 2. These features can be interpreted via a simple ensemble over the attention weights in the combined matrix $\tilde{A}(Q, K)$ of Eq. 9. 3. Compared to $A(Q, K)$ of Eq. 4, $\tilde{A}(Q, K)$ of Eq. 9 yields increased representation capacity in an efficient way. ## Inference This post is quite abstract: a prologue to Artificial General Intelligence (AGI) with out-of-distribution generalization as the focus area. It discussed the inductive biases in the current scheme of deep learning algorithms, such as transfer learning, and multiple hypotheses and hypothesis spaces. It then introduced the attention mechanism for time series forecasting. In future posts, we shall discuss inductive biases and out-of-distribution schemes in detail.
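The scaled dot-product weights and the shared-value averaging can be sketched in NumPy as follows. The dimensions and weight shapes are illustrative assumptions, not the TFT reference implementation:

```python
import numpy as np

def attention_weights(Q, K):
    # A(Q, K) = softmax(Q K^T / sqrt(d_attn)), row-wise softmax
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

def interpretable_multi_head(Q, K, V, WQ, WK, WV, m_H):
    # Average the per-head attention weights, then apply the shared
    # value projection W_V once (the additive-aggregation idea above).
    A_tilde = sum(attention_weights(Q @ WQ[h], K @ WK[h])
                  for h in range(m_H)) / m_H
    return A_tilde @ V @ WV, A_tilde
```

Because every head shares the same $VW_V$, the averaged matrix $\tilde{A}$ can be read directly as a single attention pattern over time steps.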
2021-09-18 17:13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3835045099258423, "perplexity": 1981.6592811127118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056548.77/warc/CC-MAIN-20210918154248-20210918184248-00356.warc.gz"}
https://socratic.org/questions/how-do-you-solve-4x-5-11-2#453916
# How do you solve -4x - 5= - 11- 2? Jul 20, 2017 $x = 2$ #### Explanation: $- 4 x - 5 = - 11 - 2$ Let's combine $- 11$ and $- 2$: $- 4 x - 5 = - 13$ We need to isolate $x$, so add $5$ to both sides: $- 4 x = - 8$ Divide by $- 4$ on both sides: $x = \frac{- 8}{-} 4$ or $2$ $\cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot \cdot$ To check our work, let's plug the solution back into the equation, substituting $2$ for $x$: $- 4 x - 5 = - 11 - 2$ $- 4 \left(\textcolor{red}{2}\right) - 5 = - 11 - 2$ $- 8 - 5 = - 11 - 2$ $- 13 = - 13$ We were right! $x = 2$
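The solve-and-check steps translate directly into a few lines of Python:

```python
# Reproduce the solve-and-check steps numerically.
rhs = -11 - 2             # combine -11 and -2
x = (rhs + 5) / -4        # add 5 to both sides, then divide by -4
assert x == 2
assert -4 * x - 5 == rhs  # substitute back: -13 == -13
```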
2021-12-01 13:25:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401137828826904, "perplexity": 2593.035093545185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00375.warc.gz"}
https://scicomp.stackexchange.com/questions/25746/whats-the-fastest-implementation-of-elementwise-vector-multiplication-in-fortra/25751
# What's the fastest implementation of elementwise vector multiplication in Fortran? My Fortran code contains lines like the following: integer, parameter :: dbp = kind(1.0d0) integer, parameter :: n = 1000000 real(dbp) :: x(n), y(n), z(n) y(:) = x(:) * z(:) I would like to take advantage (if possible) of some optimised maths libraries to carry out this operation. I have found a routine dgbmv which multiplies a banded matrix by a vector. This would suit my needs if I create a diagonal matrix such that $$\left( \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right) = \left( \begin{array}{cccc} x_1 & & & \\ & x_2 & & \\ & & \ddots & \\ & & & x_n \\ \end{array} \right)\left( \begin{array}{c} z_1 \\ z_2 \\ \vdots \\ z_n\end{array} \right)$$ But I don't know if this is the best way to go about calculating x(:)*z(:). Is there a more appropriate way? • It looks like you're just doing an element-wise vector multiplication, which would be `y = x*z` - no external library required. If you already have x as a vector instead of a matrix, just use it directly. Why do you need LAPACK for this? – cbcoutinho Dec 8 '16 at 16:54 • My motivation is optimisation. Currently I just use standard Fortran, but I am wondering if using LAPACK (or some other library) would make this operation faster. – DJames Dec 8 '16 at 16:56 • No, it cannot be made faster by using a library. What you are doing is simply an element-wise vector multiplication, which is already a very fast instruction (and a very basic one). The only things that can make it faster are compiler options or parallelism. Compiling with -Ofast should help increase the speed of the multiplication. Otherwise, you could transform your element-wise vector multiplication into a loop and render that loop parallel with OpenMP. There is no case where creating a matrix to then apply the multiplication could be in any way faster.
– BlaB Dec 8 '16 at 17:45 • Since you are doing an element-wise operation, you could check the vectorization capabilities of your platform, like Intel AVX, and use specialised instructions. – Moonwalker Dec 8 '16 at 20:45 The cost of the multiplication is almost insignificant compared to the cost of loading the data from memory (and writing it back). If you're worried about performance, you should be thinking about data locality. Perform more flops with each of your $x_i$ and $z_i$ values (if that's possible) when you load them from memory, before proceeding on. Setting up a diagonal matrix will at best make no difference, but is more likely to disastrously degrade performance. You are doing an element-wise operation between two vectors, so if possible it is better to use functions designed for vectors. The library LAPACK is for linear algebra, but it is built for operations and methods at a higher level than this basic operation. For an optimized form of your code you can try some compiler options, as BlaB suggested. Another way is to use a more basic-level library, like BLAS (note that LAPACK is built on top of BLAS). For example, MKL has v?mul, which performs a vector-vector element-wise multiplication. See also this question in the MKL forum. Rephrasing what other people have said: you can treat diagonal matrices as vectors, which will reduce the memory required to store them and the number of computations, unless you are using sparse routines. You can follow similar approaches with other structured matrices, like tridiagonal matrices.
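A small NumPy sketch (illustrative only, not the poster's Fortran) of the point the answers make: the diagonal-matrix route computes exactly the same result as the element-wise product, but with O(n²) memory and flops instead of O(n):

```python
import numpy as np

n = 1000
x, z = np.random.rand(n), np.random.rand(n)

# Element-wise product, the NumPy analogue of Fortran's y(:) = x(:)*z(:)
y_vec = x * z

# Same result via an explicit diagonal matrix: O(n^2) memory and work,
# which is why the answers advise against it
y_mat = np.diag(x) @ z

assert np.allclose(y_vec, y_mat)
```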
2020-08-06 17:11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5075525045394897, "perplexity": 944.4026624001501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736972.79/warc/CC-MAIN-20200806151047-20200806181047-00255.warc.gz"}
https://www.lmfdb.org/Variety/Abelian/Fq/2/5/af_q
# Properties

Label: 2.5.af_q
Base field: $\F_{5}$
Dimension: $2$
Ordinary: Yes
$p$-rank: $2$
Principally polarizable: Yes
Contains a Jacobian: No

## Invariants

Base field: $\F_{5}$
Dimension: $2$
L-polynomial: $( 1 - 3 x + 5 x^{2} )( 1 - 2 x + 5 x^{2} )$
Frobenius angles: $\pm0.265942140215$, $\pm0.352416382350$
Angle rank: $2$ (numerical)
Jacobians: 0

This isogeny class is not simple.

## Newton polygon

This isogeny class is ordinary.
$p$-rank: $2$
Slopes: $[0, 0, 1, 1]$

## Point counts

This isogeny class is principally polarizable, but does not contain a Jacobian.

| $r$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $A(\F_{q^r})$ | 12 | 864 | 21312 | 432000 | 9689052 | 239376384 | 6059560092 | 152549568000 | 3817589596992 | 95392127486304 |
| $C(\F_{q^r})$ | 1 | 33 | 166 | 689 | 3101 | 15318 | 77561 | 390529 | 1954606 | 9768153 |

## Decomposition and endomorphism algebra

Endomorphism algebra over $\F_{5}$: the isogeny class factors as 1.5.ad $\times$ 1.5.ac, and its endomorphism algebra is a direct product of the endomorphism algebras for each isotypic factor. All geometric endomorphisms are defined over $\F_{5}$.

## Base change

This is a primitive isogeny class.

## Twists

Below is a list of all twists of this isogeny class.

| Twist | Extension Degree | Common base change |
|---|---|---|
| 2.5.ab_e | $2$ | 2.25.h_ce |
| 2.5.b_e | $2$ | 2.25.h_ce |
| 2.5.f_q | $2$ | 2.25.h_ce |
| 2.5.ah_w | $4$ | 2.625.cl_cwm |
| 2.5.ab_ac | $4$ | 2.625.cl_cwm |
| 2.5.b_ac | $4$ | 2.625.cl_cwm |
| 2.5.h_w | $4$ | 2.625.cl_cwm |
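The $A(\F_{q^r})$ point counts follow from the two quadratic factors of the L-polynomial alone. A Python sketch (not part of the LMFDB page) using the standard trace recurrence for each factor $1 - sx + qx^2$:

```python
def abelian_point_counts(traces, q, rmax):
    """Point counts #A(F_{q^r}) for an isogeny class whose L-polynomial
    factors as a product of quadratics 1 - s*x + q*x^2 (one s per factor).
    For each factor the power sums t_r = alpha^r + conj(alpha)^r obey
    t_r = s*t_{r-1} - q*t_{r-2} with t_0 = 2, t_1 = s, and
    (1 - alpha^r)(1 - conj(alpha)^r) = 1 - t_r + q^r."""
    counts = []
    for r in range(1, rmax + 1):
        n = 1
        for s in traces:
            t0, t1 = 2, s
            for _ in range(r - 1):
                t0, t1 = t1, s * t1 - q * t0
            n *= 1 - t1 + q**r
        counts.append(n)
    return counts
```

Here `abelian_point_counts([3, 2], 5, 10)` reproduces the $A(\F_{q^r})$ row of the table, since the factors are $1 - 3x + 5x^2$ and $1 - 2x + 5x^2$.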
2020-05-26 04:33:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656864404678345, "perplexity": 3679.7177698033875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00101.warc.gz"}
http://clay6.com/qa/15376/a-compound-on-analysis-was-found-to-have-the-following-composition-i-na-14-
Q) # A compound on analysis was found to have the following composition: (i) Na = 14.31% (ii) S = 9.97% (iii) O = 69.50% (iv) H = 6.22%. Calculate the molecular formula of the compound, assuming that the whole of the hydrogen in the compound is present as water of crystallization. The molecular mass of the compound is 322. $\begin{array}{1 1}(a)\;Na_2SO_4.10H_2 O&(b)\;Na_2SO_4\\(c)\;NaSO_49H_2O&(d)\;Na_2SO_3 10H_2O\end{array}$ A) The empirical formula $=Na_2SH_{20}O_{14}$ Empirical formula mass $=(2\times 23)+32+(20\times 1)+(14\times 16)$ $\qquad\qquad\qquad\qquad\;=322$ $n=\large\frac{Mol.mass}{Emp.mass}$ $\;\;\;=\large\frac{322}{322}$ $\;\;\;=1$ $\therefore$ Molecular formula $=Na_2SH_{20}O_{14}$ So molecular formula $=Na_2SO_4.10H_2 O$
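The arithmetic behind the empirical formula can be sketched in Python, using the same integer atomic masses as the worked answer:

```python
# Empirical formula from mass percentages (Na=23, S=32, O=16, H=1).
masses = {"Na": 23, "S": 32, "O": 16, "H": 1}
percent = {"Na": 14.31, "S": 9.97, "O": 69.50, "H": 6.22}

moles = {el: percent[el] / masses[el] for el in masses}   # relative moles
smallest = min(moles.values())                            # here: sulfur
ratio = {el: round(moles[el] / smallest) for el in moles}
# ratio -> {'Na': 2, 'S': 1, 'O': 14, 'H': 20}, i.e. Na2SH20O14

emp_mass = sum(masses[el] * ratio[el] for el in ratio)    # 322
```

Since the empirical formula mass equals the given molecular mass of 322, $n = 1$ and the molecular formula is the empirical formula.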
2019-01-22 06:31:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.777134895324707, "perplexity": 13498.477987998263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00482.warc.gz"}
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Book%3A_Introductory_Chemistry_(CK-12)/05%3A_Electrons_in_Atoms/5.14%3A_Aufbau_Principle
# 5.14: Aufbau Principle Construction of a building begins at the bottom. The foundation is laid and the building goes up step by step. You obviously cannot start with the roof since there is no place to hang it. The building goes from the lowest level to the highest level in a systematic way. ### Aufbau Principle In order to create ground state electron configurations for any element, it is necessary to know the way in which the atomic sublevels are organized in order of increasing energy. Figure $$\PageIndex{1}$$ shows the order of increasing energy of the sublevels. Figure $$\PageIndex{1}$$: Electrons are added to atomic orbitals in order from low energy (bottom of the graph) to high (top of the graph) according to the Aufbau principle. Principal energy levels are color coded, while sublevels are grouped together and each circle represents an orbital capable of holding two electrons. The lowest energy sublevel is always the $$1s$$ sublevel, which consists of one orbital. The single electron of the hydrogen atom will occupy the $$1s$$ orbital when the atom is in its ground state. As we proceed to atoms with multiple electrons, those electrons are added to the next lowest sublevel: $$2s$$, $$2p$$, $$3s$$, and so on. The Aufbau principle states that an electron occupies orbitals in order from lowest energy to highest. The Aufbau (German: "building up, construction") principle is sometimes referred to as the "building up" principle. It is worth noting that in reality atoms are not built by adding protons and electrons one at a time, and that this method is merely an aid for us to understand the end result. As seen in the figure above, the energies of the sublevels in different principal energy levels eventually begin to overlap. After the $$3p$$ sublevel, it would seem logical that the $$3d$$ sublevel should be the next lowest in energy. However, the $$4s$$ sublevel is slightly lower in energy than the $$3d$$ sublevel and thus fills first.
Following the filling of the $$3d$$ sublevel is the $$4p$$, then the $$5s$$ and the $$4d$$. Note that the $$4f$$ sublevel does not fill until just after the $$6s$$ sublevel. Figure $$\PageIndex{2}$$ is a useful and simple aid for keeping track of the order of fill of the atomic sublevels. Figure $$\PageIndex{2}$$: The Aufbau principle is illustrated in the diagram by following each red arrow in order from top to bottom: $$1s$$, $$2s$$, $$2p$$, $$3s$$, etc. ### Summary • The Aufbau principle gives the order of electron filling in an atom. • It can be used to describe the locations and energy levels of every electron in a given atom. ### Contributors • CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
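The diagonal-arrow order in the second figure matches the $n+\ell$ rule (Madelung's rule): lower $n+\ell$ fills first, with ties broken by lower $n$. A small Python sketch assuming that rule:

```python
# Order sublevels by the n+l rule (lower n+l first, ties broken by lower n),
# which reproduces the diagonal-arrow fill order of the diagram.
subshell_letter = "spdf"
sublevels = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
order = sorted(sublevels, key=lambda nl: (nl[0] + nl[1], nl[0]))
names = [f"{n}{subshell_letter[l]}" for n, l in order]
# names begins: 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p ...
```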
http://math.stackexchange.com/questions/188046/trying-to-find-angle-at-which-projectile-was-fired
# Trying to find angle at which projectile was fired

So let's say I have a parabolic function that describes the displacement of some projectile without air resistance. Let's say it's $$y=-4.9x^2+V_0x.$$ I want to know at what angle the projectile was fired. I notice that $$\tan \theta_0=f'(x_0)$$ so the angle should be $$\theta_0 = \arctan(f'(0)),$$ or $$\theta_0 = \arctan(V_0).$$ Is this correct? I can't work out why it wouldn't be, but it doesn't look right when I plot curves. - If you intend $-9.8$ to be the acceleration of gravity, the altitude it contributes should be $\frac {-9.8}2 x^2$ (where you are using $x$ for time, not horizontal distance) –  Ross Millikan Aug 28 '12 at 19:43 oops! Yep. I forgot. :) –  Korgan Rivera Aug 28 '12 at 20:03 As the others say, if $x$ is time you are working in one dimension and there is no $\theta_0$. If you are in two dimensions, you need $x$ and $t$. –  Ross Millikan Aug 28 '12 at 20:10 @Ross: If the OP meant $x$ to be time, you're correct. However, it seems more likely to me that they really did mean $x$ to be horizontal position, in which case $\theta=\arctan f'(x_0)$ does, in fact, give the angle of the trajectory (up to sign, anyway) at position $(x_0,f(x_0))$. –  Ilmari Karonen Aug 29 '12 at 1:11 @IlmariKaronen: but then the first equation doesn't work in units at all. The acceleration of gravity shouldn't multiply distance^2 to get distance. OP edited in response to my first comment, which seems to confirm that $x$ is time. –  Ross Millikan Aug 29 '12 at 1:52 It helps to define your terms. $x$ looks like a distance, but you are using it as time in your first equation. I would start with: let $t$ be the time after the projectile is launched. Let $\vec {V_0}=V_{0x} \hat i+V_{0y} \hat j$ be the initial velocity. We have $V_{0x}=|V_0| \cos \theta_0, V_{0y}=|V_0| \sin \theta_0$. Then the equation of motion is $y=-\frac {9.8}2 t^2+V_{0y}t$.
It is true that $\theta_0=\arctan \frac {V_{0y}}{V_{0x}}$ - From what you are saying, it looks like you are thinking of $f(x)$ as a real valued function, and in effect you are only considering linear motion. (This would be fine if the cannon were firing straight up or straight down.) However, you are interested in knowing the interesting two-dimensional trajectories, and this means that $V_0$ is a 2-D vector pointing in the direction of the initial trajectory, and that the coefficient of $x^2$ actually accompanies a vector which is pointing straight down (to show the contribution of gravity). Once you find what $V_0$ is, you can then compute its angle with respect to horizontal, and have your answer. - As you only have one equation to denote the position, I assume you are working in one dimension. If you are working in one dimension, the initial angle is $0$ or $\frac{\pi}{2}$, if your $y$ denotes the height of the position. For a 2-dimensional trajectory you need two equations to denote the position of a projectile. If you shoot a projectile at speed $v_0$ with an angle $\alpha$, the position $(x,y)$ is $$x=x_0+v_0t\cos\alpha$$ $$y=y_0+v_0t\sin\alpha+\tfrac12 a{t}^{2}$$ where $(x_0,y_0)$ is the initial position, $a$ is the (signed) acceleration of gravity and $t$ is the time. If you know $v_x=v_0\cos\alpha$ and $v_y=v_0\sin\alpha$, the initial angle $\alpha$ would be $\arctan\frac{v_y}{v_x}$. You could define a function that gives the angle of motion as a function of time: $\alpha(t)=\arctan\frac{v_0\sin\alpha+at}{v_0\cos\alpha}$. The angle at the vertex of the parabola that the projectile describes should be $0^\circ$, as $v_y$ should be zero there. So $\alpha(\frac{-v_0\sin\alpha}{a})=0$. If $x_0=0$ and $y_0=0$, then at landing time $\alpha(\frac{-2v_0\sin\alpha}{a})=-\arctan\frac{v_y}{v_x}$. Summarizing: with your equation the trajectory is vertical, so the angle $\alpha$ would be $\frac{\pi}{2}$, since $v_x=0$ and the horizontal position does not change. - Yes, it's correct.
In fact, it would be correct, by definition, for any trajectory: if the path of an object in the $(x,y)$ plane can be written as $y = f(x)$ for some differentiable function $f$, and the object is moving along the path in the direction of increasing $x$, then the direction the object moves in (expressed as an angle from the horizontal) when it crosses the line $x = x_0$ is given by $\arctan f'(x_0)$. As for why your results might not look like what you expect, well, the first question that comes to mind is whether your $\arctan$ function returns degrees or radians. Being off by a factor of $180/\pi$ would certainly make the results look odd. If that's not the issue, then the next thing I'd check is whether you actually calculated $f'$ correctly. P.S. I'm assuming that you did, in fact, mean $x$ to be horizontal position rather than time. However, if so, your definition of $f$ is somewhat odd. Generally, if a projectile is fired at time $t = 0$ from position $(x_0,y_0)$ with speed $v_0$ and angle $\theta$, and is pulled down by gravity with acceleration $g$, the equations of motion are \begin{aligned} x &= x_0 + v_0 \cos(\theta)\, t \\ y &= y_0 + v_0 \sin(\theta)\, t - \frac12 g t^2, \end{aligned} the first of which we may solve for $t$ and substitute into the second to get the trajectory \begin{aligned} y &= y_0 + v_0 \sin(\theta) \frac{x-x_0}{v_0 \cos(\theta)} - \frac12 g \left(\frac{x-x_0}{v_0 \cos(\theta)}\right)^2 \\ &= y_0 + \tan(\theta) (x-x_0) - \frac{g}{2v_0^2\cos^2(\theta)}(x-x_0)^2. \end{aligned} Assuming that $x_0 = y_0 = 0$, and comparing the result with your original equation, we see that the coefficient $\tan(\theta)$ corresponds to your $V_0$, which is thus not a velocity but a slope, while the coefficient $-9.8$ (or $-4.9$ as you edited it) corresponds not to $-g$ or $-g/2$, but to $-g/(2v_0^2\cos^2(\theta))$.
Only in the special case where the horizontal velocity component $v_0\cos(\theta) = 1$ are the latter two expressions equal (as they should be, given that in that case $t = x$). -
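The point made in the accepted reasoning — that when $x$ is horizontal distance, the launch angle is recovered from the slope of the trajectory at the origin — can be checked numerically. The values $v_0 = 20$ and $\theta = 35^\circ$ below are my own illustrative choices, not from the question:

```python
import math

# Trajectory with x as horizontal distance (not time):
#   f(x) = tan(theta)*x - g*x^2 / (2*v0^2*cos(theta)^2)
g, v0, theta = 9.8, 20.0, math.radians(35.0)

def f(x):
    return math.tan(theta) * x - g * x**2 / (2.0 * (v0 * math.cos(theta))**2)

# Numerical slope at the launch point x = 0 recovers the launch angle.
h = 1e-6
slope = (f(h) - f(0.0)) / h
recovered_deg = math.degrees(math.atan(slope))
print(recovered_deg)  # ~35.0, since f'(0) = tan(theta)
```

Had $x$ been time instead, the motion would be one-dimensional and no such angle would exist, which is exactly the ambiguity debated in the comments.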
https://math.stackexchange.com/questions/2255761/finding-torsion-coefficients-for-mathbbz-4-times-mathbbz-9
# Finding torsion coefficients for $\mathbb{Z}_4\times\mathbb{Z}_9$

The part of the decomposition theorem $11.12$ corresponding to the subgroups of prime power order can also be written in the form $\mathbb{Z}_{m_1}\times \mathbb{Z}_{m_2}\times \cdots \times \mathbb{Z}_{m_r}$, where $m_i$ divides $m_{i+1}$ for $i=1,2,\cdots,r-1$. The numbers $m_i$ can be shown to be unique, and are the torsion coefficients of $G$.

Find the torsion coefficients of $\mathbb{Z}_4\times\mathbb{Z}_9$.

Theorem $11.12$ is concerned with a finitely generated abelian group being isomorphic to $$\mathbb{Z}_{(p_1)^{r_1}}\times \mathbb{Z}_{(p_2)^{r_2}}\times \cdots \times \mathbb{Z}_{(p_n)^{r_n}}\times \mathbb{Z}\times \mathbb{Z} \cdots \times \mathbb{Z}$$ How can I represent $\mathbb{Z}_4\times\mathbb{Z}_9$ in the form $\mathbb{Z}_{m_1}\times \mathbb{Z}_{m_2}\times \cdots \times \mathbb{Z}_{m_r}$? Since $(4,9)=1$, $\Bbb{Z}_4\times \Bbb{Z}_9\cong \Bbb{Z}_{36}$, and also $\Bbb{Z}_4$ and $\Bbb{Z}_9$ cannot be decomposed into cyclic factors of smaller size. Hence $36$ is the torsion coefficient. Theorem: $(m,n)=1$ iff $\Bbb{Z}_m\times \Bbb{Z}_n\cong \Bbb{Z}_{mn}$
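The passage from a primary decomposition to the torsion coefficients can be mechanized: repeatedly replace a pair $(a,b)$ with $a \nmid b$ by $(\gcd(a,b), \operatorname{lcm}(a,b))$, which is legitimate because $\Bbb{Z}_a\times\Bbb{Z}_b\cong\Bbb{Z}_{\gcd(a,b)}\times\Bbb{Z}_{\operatorname{lcm}(a,b)}$. A Python sketch of this procedure (my own illustration, not from the question):

```python
from math import gcd

def torsion_coefficients(orders):
    """Invariant factors m1 | m2 | ... | mr of Z_{orders[0]} x Z_{orders[1]} x ...
    Repeatedly replace a pair (a, b) with b not divisible by a by (gcd, lcm)
    until each entry divides the next; gcd(a, b) < a whenever a does not
    divide b, so the process terminates."""
    m = [o for o in orders if o > 1]
    changed = True
    while changed:
        changed = False
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                if m[j] % m[i] != 0:
                    a, b = m[i], m[j]
                    m[i], m[j] = gcd(a, b), a * b // gcd(a, b)
                    changed = True
        m = [o for o in m if o > 1]  # drop trivial Z_1 factors
        m.sort()
    return m

print(torsion_coefficients([4, 9]))  # [36], since gcd(4, 9) = 1
print(torsion_coefficients([4, 6]))  # [2, 12]
```

The first call reproduces the answer above: $\Bbb{Z}_4\times\Bbb{Z}_9$ has the single torsion coefficient $36$.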
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-9-roots-and-radicals-9-1-roots-and-radicals-problem-set-9-1-page-401/66
## Elementary Algebra $12\sqrt{7}$ RECALL: The distributive property states that for any real numbers $a$, $b$, and $c$: $ac+bc=(a+b)c$. The factor common to all terms is $\sqrt{7}$. Use the rule above to simplify and obtain: $8\sqrt{7}+13\sqrt{7}-9\sqrt{7}=(8+13-9)\sqrt{7} \\=12\sqrt{7}$
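A quick numerical sanity check of the factoring step (my own addition, not part of the solution): the three like terms collapse to $(8+13-9)\sqrt{7}$.

```python
import math

# Combining like terms: 8*sqrt(7) + 13*sqrt(7) - 9*sqrt(7) = (8 + 13 - 9)*sqrt(7)
s7 = math.sqrt(7)
lhs = 8 * s7 + 13 * s7 - 9 * s7
rhs = 12 * s7
print(abs(lhs - rhs) < 1e-12)  # True
```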
https://www.physicsforums.com/threads/isolated-and-non-isolated-conductors.762685/
# Isolated and non-isolated conductors

1. Jul 22, 2014

I picked these questions below from a set of exams; most of them are solved except this one. Questions 2 and 5 are confusing me. In question 2, S1 is supposed to be isolated, and in 5 it's not. If anybody can briefly explain to me what is happening, or can solve them for me, it will be much appreciated. Thanks.

2. Jul 24, 2014

### collinsmark

Question 2: The shell (the question calls it a "hollow sphere," I'm calling it a "shell" -- same thing), S2 is isolated and has no net electric charge on it (it is initially neutral).

a- Determine the charge distribution on the sphere S1 and also the shell S2. Hint: Use Gauss' law for this. Also note that the electric field within a conducting material is always zero (in the material itself -- this doesn't necessarily apply to hollow portions [i.e. cavities] inside the material, but rather just inside the material itself). [Edit: it doesn't necessarily apply to the surface either. You know that the shell S2 has no net charge, but it might very well have equal but opposite charges on its inner and outer surfaces.]

b- Determine the new potential of the inner sphere. Hint: you'll need to do some integration as part of the solution. How do you calculate the potential from a given electric field? Assume the potential at infinity is zero.

c- Same as part b, but you only need to integrate to the shell, S2.

Question 5: a- The potential of the inner sphere S1 is brought back to potential V1. The shell S2 has zero net charge. Find the charge on S1.

b- Same situation as above, but now the shell S2 is brought to a potential of zero. Find the charge on S1.

Last edited: Jul 24, 2014

3. Jul 25, 2014

Let's begin from question 1. My solution:

1) V∞ − V1 = −∫_∞^R1 E·dr; what I got is V1 = Q1/(4πεR1), and then Q1 = 4πεR1·V1. (But since in question 5 he said that S1 is no longer isolated but rather maintained at potential V1, I'm not sure about this solution.)
2) a) From Gauss' law, ∮E·ds = Qin/ε (where r > R2, inside the shell material). Then Qin = Q1 + Q′, and as E must be zero inside the "shell", Qin = 0 and so Q′ = −Q1. (Q′ is the charge on the inner surface of the shell; I chose a spherical Gaussian surface surrounding the sphere, of radius just above R2, and consequently the charge on the outer surface of the shell will be +Q1.)

b) V∞ − V1′ = −∫_∞^R1 E·dr; we get that V1′ = Q1/(4πεR1).

c) V2 − V1′ = −∫_{R1}^{R2} E·dr; we get that V2 = Q1/(4πεR2).

3) a) If V2 = 0, then the charges on the surface of the shell will be zero, so Q2 = 0.

b) V″1 = ∫_{R1}^{R2} E·dr, so V″1 = (Q1/4πε)·(1/R2 − 1/R1).

Let's skip question four. In question 5 I didn't know what to do. I'm not sure about the solutions I wrote above, so if you can help, please do. Thanks btw.

Last edited: Jul 25, 2014

4. Jul 25, 2014

### collinsmark

That looks about right to me so far.

If I understand, yes you are correct. Your previously calculated value of Q1 is on the outside of the central sphere. −Q1 exists on the inside of the S2 shell and +Q1 exists on the outside of the S2 shell.

I think you are missing a few steps here. You need to break the integral up into three parts, since there are three different regions of electric field: infinity to R3, R3 to R2, and R2 to R1. You can then sum the individual results together to find the potential from infinity to R1.

I interpret the problem as asking you to find the potential between R2 and infinity, not R2 and R1. Your answer seems correct though. Just realize that that is the potential difference of S2 from infinity, i.e., it is not V2 − V′1. (In other words, it's V2 − 0.)

That's not right. But you can fix it easily enough. What it means to earth shell S2 is that the electric field from infinity to R3 must be equal to zero. Given that the sphere S1 has a Q1 charge on it, what does Gauss' law say about what the total charge must be on S2 (where Q2 is the net charge on shell S2)?

Edit: Let me rephrase that for clarity. Shell S2 is earthed. That means the electric field for all space outside the shell S2 is zero.
• If so, what does Gauss' law say about the total charge of the S1 and S2 combination?
• Since you know that S1 has charge on it, what charge must S2 have on it?

Part a) You don't know what the charge is on S1, but you do know its potential (its potential is V1). You don't know the value of this charge, but you can still give it a name for now. Let's just call it q for now (or use whichever name you prefer). Set up the charge distribution on shell S2 in terms of the variable q. Now break up the integral and solve the three parts of the integrals. (Don't forget about the negative involved: $V = - \int_\infty^R \vec E \cdot \vec{d \ell}$.) You know that the sum of all the individual sections must be V1. With that, solve for q.

Part b) This time, the shell S2 is earthed (i.e., grounded, has a potential of 0 Volts). Essentially that means that this time you only need to integrate from R2 to R1 instead of all the way from infinity to R1.

Last edited: Jul 25, 2014

5. Jul 26, 2014

OK thanks, I'll show my corrections:

In question 2) b) V∞ − V1′ = −(∫_{R2}^{R1} E·dr + ∫_{R3}^{R2} E·dr + ∫_∞^{R3} E·dr); we will get that V1′ = Q1/(4πεR1) − Q1/(4πεR2) + Q1/(4πεR3).

c) V∞ − V2 = −(∫_{R3}^{R2} E·dr (which is 0) + ∫_∞^{R3} E·dr); then we will get that V2 = Q1/(4πεR3).

In question 3) a) The electric field outside the shell is zero. Let's apply Gauss' law: ∮E·ds = Qin/ε, so E·4πr² = Qin/ε (for r > R3); then Qin = 0, and as Qin = Q1 + Q2, Q2 = −Q1. (But I think the charges on the outer surface of the shell will be zero, and −Q1 will sit on the inner surface of the shell.)

b) V∞ − V1″ = −∫_∞^{R1} E·dr, thus −V1″ = −(∫_{R2}^{R1} E·dr + ∫_{R3}^{R2} E·dr (which is zero since Ein = 0) + ∫_∞^{R3} E·dr (which is zero)). We will get that V1″ = (Q1/4πε)(1/R1 − 1/R2).

5) a) Let q be the charge of S1; the charge distribution on the conductors will be the same as in the previous parts. V∞ − V1 = −(∫_{R2}^{R1} E·dr + ∫_{R3}^{R2} E·dr (which is 0) + ∫_∞^{R3} E·dr), so V1 = q/(4πεR1) − q/(4πεR2) + q/(4πεR3). Since V1 = Q1/(4πεR1), we have Q1/(4πεR1) = (q/4πε)(1/R1 − 1/R2 + 1/R3), and then Q1 = q(1 − R1/R2 + R1/R3).

b) Same thing, and the answer is Q1 = q(1 − R1/R2).
But when he said in question 1 that "S1 is thereafter isolated" and in question 5 "S1 is no more isolated", what is the real difference? (Or, in other words, what is the relationship between S1 and the shell in the two cases?)

In question 4, if we don't add any new conductor to the system, the self-capacitances and coefficients of influence C11, C21 and C22 stay constant regardless of what changed in the previous parts (like in 3) S2 is earthed but in 2) it's not). Is this right?

Last edited: Jul 26, 2014

6. Jul 26, 2014

### collinsmark

Yes, I think that's correct.

Yes, I think that's correct again.

Yes, very nice.

Yes, that looks right.

Yes. But now that you're ready to submit the final answer, change q to Q′1, as indicated in the problem statement (they're the same thing here).

That looks right. Similarly though, change q to Q″1 before submitting your final answer.

I think that when the problem statement says "...and thereafter isolated," it really means "...and thereafter isolated, until later, when I tell you something new or different has happened."

By the way, when a conductor is not isolated, it means that net charge can flow into and out of it. It does not necessarily mean that net charge will flow into or out of it, it just means that it can. When a conductor is isolated, any net charge it has (if any) is trapped there, and stays.

Yes, I believe so. Capacitance should not change merely by changing charge. So you're right. On the other hand, in order to calculate the self-capacitances, you may wish to add charges so you can calculate the potentials and then solve the capacitance in question by invoking C = Q/V. (And you do this differently depending on which individual self-capacitance or relative capacitance you are trying to calculate.)
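The two cases worked out in the thread — the neutral isolated shell giving V1 = (q/4πε)(1/R1 − 1/R2 + 1/R3), and the earthed shell giving V1 = (q/4πε)(1/R1 − 1/R2) — can be checked numerically. This is my own sketch; the radii and the value of V1 are hypothetical numbers, not from the exam:

```python
import math

# Concentric conductors: inner sphere radius R1, shell inner/outer radii R2, R3.
# Isolated neutral shell: V1 = k*q*(1/R1 - 1/R2 + 1/R3), with k = 1/(4*pi*eps0).
# Earthed shell: the outer-surface charge vanishes, so V1 = k*q*(1/R1 - 1/R2).
eps0 = 8.854e-12
R1, R2, R3 = 0.10, 0.20, 0.25   # metres (hypothetical radii)
V1 = 100.0                       # volts, the fixed potential of S1

k = 1.0 / (4.0 * math.pi * eps0)

q_isolated_shell = V1 / (k * (1/R1 - 1/R2 + 1/R3))
q_earthed_shell  = V1 / (k * (1/R1 - 1/R2))

print(q_isolated_shell, q_earthed_shell)
```

As expected, earthing the shell removes the 1/R3 term, so a larger charge q is needed to hold S1 at the same potential V1.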
https://taskvio.com/bots/url-encoder-decoder/
# What are the uses of a URL encoder decoder?

The URL specification, RFC 1738, states that only a small set of characters may appear unencoded in a URL:

The letters A to Z and a to z

The digits 0 to 9

A few special characters: hyphen (-), underscore (_), period (.), plus sign (+), asterisk (*), dollar sign ($), exclamation mark (!), single quote ('), parentheses ( ), and comma (,)

Every other character must be encoded.

## Let's understand how a URL encoder works

A URL encoder has one simple rule for any character outside this set: replace it with a percent sign followed by the character's two-digit hexadecimal code. URI (Uniform Resource Identifier) is the general name; URL (Uniform Resource Locator) and URN (Uniform Resource Name) are its specific forms.

Let's see how some symbols get changed into their defined codes: $ becomes %24, + becomes %2B, , becomes %2C, & becomes %26, etc.

### How to use the URL converter tool

The URL Encoder-Decoder tool is really nice and very easy to use. You can simply copy-paste your URL here, or upload it from your computer, Dropbox, or Drive folder if you have already saved it. To use the tool, go to the taskvio.com home page, where you will find the URL Encoder-Decoder.

First, open the URL Encoder-Decoder tool, then write your URL in the text box, copy-paste it, or upload it from your computer.

Second, after you select your file or type into the text box, simply click the Encode or Decode button.

Third, you will get the encoded or decoded result here.
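The same conversions can be reproduced with Python's standard library, as a concrete illustration of what such a tool does (the article's own tool presumably behaves similarly):

```python
from urllib.parse import quote, unquote

# Percent-encoding: reserved characters become %XX escapes, while unreserved
# characters (letters, digits, - _ . ~) pass through unchanged.
print(quote("$", safe=""))   # %24
print(quote("+", safe=""))   # %2B
print(quote(",", safe=""))   # %2C
print(quote("&", safe=""))   # %26
print(quote("hello-world_1.txt", safe=""))  # unchanged

# Decoding reverses the process exactly.
assert unquote(quote("price=$5&qty=2+", safe="")) == "price=$5&qty=2+"
```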
It's really easy to use and genuinely useful, and the best part is that you can decode unfamiliar links to see where they actually lead. For example, we all receive lots of emails from subscriptions and websites, and many of them are spam. Decoding an encoded link before clicking it helps you figure out whether it is a spam link or a useful one, and where the email really came from.
https://stats.stackexchange.com/questions/505393/do-i-need-to-use-the-complex-conjugate-when-convolving-two-functions-with-the-ff
# Do I need to use the complex conjugate when convolving two functions with the FFT? I know that, due to the convolution theorem, two densities $$f$$ and $$g$$ can be convolved by (i) applying the FFT to both of them, (ii) multiplying the results, (iii) applying an inverse FFT. Since I need to convolve some functions I thought it best to look up some example code to see the details of how to implement this so I can do something similar. In particular, since kernel density estimation involves convolution I expected this FFT convolution procedure is probably used in the R density() function. However, upon looking at the source code for this, instead of $$f*g = \mathcal{F}^{-1}(\mathcal{F}[f]\cdot \mathcal{F}[g]),$$ the following is used (line 159) $$f*g = \mathcal{F}^{-1}(\mathcal{F}[f]\cdot \mathcal{F}[\overline{g}]),$$ where $$g$$ is the kernel function. I don't understand why the complex conjugate is being taken here since the complex conjugate doesn't appear in the convolution theorem? If I want to convolve two functions using the FFT do I need to take the complex conjugate or not?
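The difference between the two formulas can be seen with a tiny discrete example. The sketch below is my own (pure Python with a naive $O(N^2)$ DFT standing in for the FFT, purely for illustration): multiplying by $\mathcal{F}[g]$ gives circular convolution, while multiplying by the conjugate gives circular cross-correlation, i.e. convolution with $g$ reflected. For a kernel symmetric about the origin the two coincide, which is presumably why either form works in the KDE setting:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (an FFT computes the same values)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

f = [1.0, 2.0, 3.0, 0.0]
g = [0.5, 0.5, 0.0, 0.0]
F, G = dft(f), dft(g)

# Convolution theorem: inverse transform of the plain product = circular convolution.
conv = [c.real for c in idft([a * b for a, b in zip(F, G)])]

# With the conjugate: inverse transform of F * conj(G) = circular cross-correlation,
# i.e. convolution with g time-reversed.
corr = [c.real for c in idft([a * b.conjugate() for a, b in zip(F, G)])]

# Direct circular convolution for comparison.
direct = [sum(f[m] * g[(n - m) % 4] for m in range(4)) for n in range(4)]

print([round(v, 6) for v in conv])  # matches `direct`
print([round(v, 6) for v in corr])  # the same values with g flipped
```

So whether the conjugate belongs in the formula depends on whether you want convolution or correlation of $f$ with $g$; for the symmetric kernels used in kernel density estimation the two operations give the same result.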