https://nodus.ligo.caltech.edu:8081/40m/page61?sort=Type&attach=1
40m Log, Page 61 of 344

10780   Thu Dec 11 12:50:12 2014   manasa   Summary   General   PSL table optical layout

I assembled the telescope to couple PSL light into the fiber. The maximum coupling I could obtain was 10mW out of 65mW (~15%). I was expecting to achieve 80-90% coupling from my design estimates. It makes me wonder whether the beam waist measurements made by Harry during the summer were correct in the first place. I would like to go back and check the beam waist at the PSL table.

Also, we need a pair of 8m (~25 feet) long SMA cables to carry the RF signal from the beat PD on the PSL table to the frequency counter module on the IOO rack. Steve says that we had a spool of SMA cable and it was borrowed by someone a few months ago. Any update on who is holding it, or whether it has been used up already, would help.

The X end slow computer was down this morning, so I used only the Y arm ALS to record the noise level for reference. DTT data for the ALSY out-of-loop noise before opening the PSL enclosure is saved in /users/manasa/data/141211/ALSYoutLoop.xml

10787   Thu Dec 11 23:34:06 2014   manasa   Summary   General   PSL table optical layout

Quote: (entry 10780 above)

I forgot to elog this earlier: I have temporarily removed the DC photodiode for GTRY in order to install the fiber holder on the PSL table. So GTRY will not be seeing anything right now.

10788   Fri Dec 12 02:30:25 2014   Jenne   Summary   General   PSL table optical layout

Quote: (entry 10787 above)

After some confusion, I discovered this a few hours ago.

10791   Fri Dec 12 14:38:39 2014   manasa   Summary   General   Frequency Offset Locking - To Do List (Revised)

Unfortunately, the order placed for beam samplers last week did not go through. These will be used at the X and Y end tables to dump the unwanted light appropriately. Since they will not be here until Tuesday, I revised the timeline for FOL-related activities accordingly.

Attachment 1: FOLtodolist.pdf

10792   Fri Dec 12 15:19:04 2014   manasa   Summary   General   Dec 12 - PSL table

Quote: (entry 10791 above)

I was working on the PSL table today. Since the rejected 1064nm light after the SHG crystal is not easily reachable to measure beam widths close to the waist, I put in an f=300mm lens and measured the beam size around its focus. I used this data and redesigned the telescope using 'a la mode'. I used a beam splitter to attenuate the beam directed towards the fiber. The reflected beam from the BS has been dumped (I need to find a better beam dump than the one being used right now).
I have only ~200uW at the input of the fiber coupler after the BS, and 86uW at the output of the fiber (43% coupling). I moved the GTRY DC photodiode and the lens in front of it to make space for the fiber coupler mount. The layout on the PSL table right now is as shown below. I have also put the fiber chassis inside the PSL enclosure on the rack. I moved the coherent spectrum analyzer controller that is not being used to make space on the rack.

Attachment 2: PSLfiberChassis.png

10800   Mon Dec 15 22:40:09 2014   rana   Summary   PSL   PMC restored

Found that the PMC gain had been set to 5.3 dB instead of 10 dB since 9 AM this morning, with no elog entry. I also re-aligned the beam into the PMC to minimize the reflection. It was almost all in pitch.

10808   Wed Dec 17 11:57:56 2014   manasa   Summary   General   Y arm optical layout

I was working around the PSL table and the Y end table today. I modified the Y arm optical layout that couples the 1064nm light leaking from the SHG crystal into the fiber for frequency offset locking. The ND filter that was used to attenuate the power coupled into the fiber has been replaced with a beam sampler (Thorlabs BSF-10C). The reflected power after this optic is ~1.3mW, and the transmitted power has been dumped to a razor blade beam dump (~210mW). Since we have a spare fiber running from the Y end to the PSL table, I installed an FC/APC fiber connector on the PSL table to connect them and monitored the output power at the Y end itself. After setting up, I have ~620uW of Y arm light on the PSL table (~48% coupling). During the course of the alignment, I lowered the power of the Y end NPRO and disengaged the ETMY oplev. These were reset after I closed the end table. Attached is the out-of-loop noise measurement of the Y arm ALS error signal before (ref plots) and after.
Attachment 1: 58.png

10829   Mon Dec 22 15:46:58 2014   Kurosawa   Summary   IOO   Seven transfer functions

The IMC OLTF has been measured from 10 kHz to 10 MHz.

Attachment 1: MC_OLTF.pdf

10849   Tue Dec 30 20:35:59 2014   rana   Summary   PSL   PMC Tune Up

1. Calibrated the Phase Adjust slider for the PMC RF modulation; did this by putting the LO and RF Mod out on the TDS 3034 oscope and triggering on the LO. This scope has a differential phase measurement feature for periodic signals.
2. Calibrated the RF Amp Adj slider for the PMC RF modulation (on the phase shifter screen).
3. The PMC 35.5 MHz frequency reference card is now in our 40m DCC tree.
4. The LO and RF signals both look fairly sinusoidal!
5. Took photos of our Osc board - they are on the DCC page. Our board is D980353-B-C, but no such modern version exists anywhere in the DCC.
6. The PMC board's Mixer Out shows a few mV of RF at multiples of the 35.5 MHz mod freq. This comes in via the LO, and can't be gotten rid of by using a balun or BP filters.
7. I installed the LARK 35.5 MHz BP filter that Valera sent us awhile ago (Steve has the datasheet to scan and upload to this entry). It is narrow and has a 2 dB insertion loss.

For tuning the phase and amplitude of the mod. drive: since we don't have access to both RF phases, I just maximized the gain using the RF phase slider. First, I flipped the sign using the 'phase flip' button so that we would be near the linear range of the slider. Then I put the servo close to oscillation and adjusted the phase to maximize the height of the ~13 kHz body mode. For the amplitude, I just cranked the modulation depth until it started to show up as a reduction in the transmission by ~0.2%, then reduced it by a factor of ~3. That makes it ~5x larger than before.
Attachment 1: 17.png Attachment 2: PMCcal.ipynb.xz Attachment 3: PMC_Osc_Cal.pdf

10879   Thu Jan 8 19:02:42 2015   Jax   Summary   Electronics   MC demod modifications

Here's a summary of the changes made to the D990511 serial 115 (formerly known as REFL 33), as well as a short procedure. It needed tuning to 29.5 MHz and also had some other issues that we found along the way. So here's a picture of it as built:

1. U11 and U12 changed from 5 MHz LP to 10 MHz LP filters.
2. Resistors R8 and R9 moved from their PCB locations to between pins 1 (signal) and 3 (ground) of U11 and U12, respectively. These were put in the wrong place for proper termination, so it made sense to shift them while I was already replacing the filters.

Also, please note: whoever labeled the voltages on this board needed an extra cup of coffee that day. There are two separate 15V power supplies, one converted from 24V, one directly supplied. The directly supplied one is labeled 15A. This does NOT mean 15 amps.

Transfer functions. Equipment: 4395A, signal generator (29.5 MHz), two splitters, one mixer.

You can't take the TF from PD in to I/Q out directly. Since this is a demod board, there's a demodulating (downconverting) mixer in the I and Q PD-in paths; negligible signal will get through without some signal applied to the L input of the mixer. In theory, this signal could be at DC, but there are blocking capacitors in the LO-in paths. Therefore, you have to upconvert the signal you're using to probe the board's behavior before it hits the board. Using the 4395A as a network analyzer, split the RF out: RFout1 goes to input R, RFout2 goes to the IF port of the mixer. Split the signal generator (SG): SG1 goes to LO in, SG2 goes to the L port of the mixer. The RF port of the mixer (your upconverted RFout2) goes to PD in, and the I/Q out goes back to the A/B port of the 4395A - at the same frequency as the input, thanks to the board's internal downconversion.
Phase measurement. Equipment: signal generator (29.5 MHz), signal generator (29.501 MHz), oscilloscope.

Much simpler: 29.5 MHz to the LO input (0 dBm), 29.501 MHz to the PD input (0 dBm), compare the phases of the I/Q outputs on the oscilloscope. There are four variable capacitors in the circuit that are not on the DCC revision of the board - C28-31. On the LO path, C28 tunes the I phase and C30 tunes the Q phase. On the PD path, C29 and C31 appear to be purely decorative - both are in parallel with each other on the PD-in Q path; I'm guessing C29 was supposed to be on the PD-in I path. Fortunately, C28 and C30 had enough dynamic range to tune the I/Q phase difference to 90 degrees. Before tuning: After tuning:

10899   Wed Jan 14 02:11:07 2015   rana   Summary   Treasure   2-loop Algebra Loopology

I show here the matrix formalism to calculate analytically the loop TF relationships for the IMC with both FSS actuators, so that it would be easier to interpret the results. The attached PDF shows the Mathematica notebook and the associated block diagram. In the notebook, I have written the single-hop connection gains into the K matrix. P is the optical plant, C is the Common electronic gain, F is the 'fast' NPRO PZT path, and M is the phase Modulator. G is the closed-loop gain matrix. The notation is similar to Matlab SS systems; the first index is the row and the second index is the column. If you want to find the TF from node 2 to node 3, you ask for G[[3,2]]. As examples, I've shown how to get the FAST gain TF that I recently made with the Koji filter box, as well as the usual OLG measurement that we make from the MC servo board front panel.

Attachment 1: FSSloop.pdf Attachment 2: FSSloop.png

10952   Wed Jan 28 23:53:24 2015   Koji   Summary   ASC   Xarm ASS fix

The X-arm ASS was fixed. ASS_DITHER_ON.snap was updated so that the new setting can be loaded from the ASS screen. The input and output matrices and the servo gains were adjusted as found in the attached image.
The output matrix was adjusted by looking at the static response of the error signals when a DC offset was applied to each actuator. The servo was tested with misalignment of the ITM, ETM, and BS. In fact, the servo restored the transmission from 0.15 to 1. The resulting contrast after ASSing was at the ~99% level. (I forgot to record the measurement, but the dark fringe level of ASDC was 4~5 counts.)

Attachment 1: 12.png

10974   Wed Feb 4 18:27:55 2015   Koji   Summary   ASC   Xarm ASS fix

Please remember that the Xarm ASS needs FM6 (bounce filters) to be ON in order to work properly.

10986   Sat Feb 7 13:34:11 2015   Koji   Summary   PSL   ISS AOM driver check

I wanted to check the status of the ISS. The AOM driver response was measured on Friday night. The beam path has not been disturbed yet.

- I found the AOM crystal had been removed from the beam path. It was left so.
- The AOM driver has a +24V power supply instead of the specified +28V.

I wanted to check the functionality of the AOM driver.

- I inserted a 20dB directional coupler between the driver and the crystal. To do so, I first turned off the power supply by removing the corresponding fuse block at the side panel of the 1X1 rack. Then a ZFDC-20-5-S+ was inserted, and the coupled output was connected to a 100MHz oscilloscope with 50Ohm termination. Then I plugged the fuse block in again to energize the driver box. Note that the oscilloscope bandwidth caused a reduction of the amplitude by a factor of 0.78; this has already been compensated in the result.

- First, I checked the applied offset from a signal generator (SG) and the actual voltage at the AOM input. The SG out and the AOM control input are supposed to have an impedance of 50Ohm. However, the voltage seen at the AOM input was apparently low: it behaved as if the input impedance of the AOM driver were 25Ohm. In any case we want to use a low-output-impedance source to drive the AOM driver, but we should keep this in mind.

- The first attachment shows the output RF amplitude as a function of the DC offset.
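The apparent 25Ohm input impedance can be sanity-checked against the drive levels quoted in this entry: a 50Ohm source driving a 25Ohm load delivers 1/3 of its open-circuit voltage. A minimal sketch (the helper name is mine; the 1.59V and 0.3Vpp numbers are the SG open-circuit values from this entry):

```python
# Voltage divider: source with output impedance rs driving a load rl.
def divided(v_open, rs=50.0, rl=25.0):
    """Voltage appearing across the load for a given open-circuit source voltage."""
    return v_open * rl / (rs + rl)

# Numbers from this entry: SG seen by a high-Z scope (open circuit) vs. AOM input.
print(divided(1.59))   # -> 0.53 V, matching the DC measured at the AOM input
print(divided(0.3))    # -> 0.10 Vpp, matching the ~0.099 Vpp AC term
```

If the driver input really were 50Ohm, the AOM would instead see half the open-circuit voltage (0.795V DC), which is not what was measured.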
The horizontal axis is the DC voltage AT THE AOM INPUT (not at the SG out). Above 0.5V offset some nonlinearity is seen. I wasn't sure whether this is related to the lower supply voltage or not. I'd use the nominal DC of 0.5V at the AOM. The output with an input of 1V does not reach the specified output of 2W (33dBm). I didn't touch the RF output adjustment yet. And again, the supply is not +28V but +24V.

- I decided to measure the frequency response at the offset of 0.53V at the AOM; this corresponds to a DC offset setting of 0.8V. A 0.3Vpp oscillation was applied, i.e. the SG out seen by a high-Z scope is V_SG(t) = 1.59 + 0.3 sin(2 pi f t) [V], and the AOM drive voltage is V_AOM(t) = 0.53 + 0.099 sin(2 pi f t). From the maximum and minimum amplitudes observed on the oscilloscope, the response was checked (Attachment 2). The plot shows the modulation depth (0~1) obtained when an amplitude of 1Vpk is applied at the AOM input. The value is ~2 [1/V] at DC. This makes sense: as the control amplitude is 0.5, the applied voltage swings from 0V to 1V and yields 100% modulation. At 10MHz the first sign of reduction is seen, and the response starts dropping above 10MHz. The specification says the rise time of the driver is 12nsec. If the system has a single pole, there is a relationship between the rise time (t_rise) and the cut-off frequency (fc): fc*t_rise = 0.35 (cf. Wikipedia "Rise Time"). If we believe this, the specified fc is 30MHz. That sounds too high compared to the measurement (fc ~15MHz). In any case the response is pretty flat up to 3MHz.

Attachment 1: AOM_drive.pdf Attachment 2: AOM_response.pdf

10988   Sun Feb 8 21:54:50 2015   rana   Summary   PSL   ISS AOM driver check

This is good news. It means that the driver probably won't limit the response of the loop - I expect we'll get 20-30 deg of phase lag @ 100 kHz just because of the acoustic response of the AOM PZT + crystal.
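The rise-time argument above is a one-liner worth checking (a sketch of the single-pole relation fc*t_rise = 0.35 quoted in the entry):

```python
# Single-pole system: 10-90% rise time and -3 dB cutoff obey fc * t_rise ~= 0.35.
t_rise = 12e-9               # specified driver rise time [s]
fc_spec = 0.35 / t_rise      # implied cutoff frequency [Hz]
print(fc_spec / 1e6)         # ~29.2 MHz, i.e. the "30 MHz" quoted above

# Conversely, the measured fc of ~15 MHz would correspond to a rise time of:
t_rise_meas = 0.35 / 15e6
print(t_rise_meas * 1e9)     # ~23 ns, about twice the 12 ns spec
```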
11005   Wed Feb 11 18:11:46 2015   Koji   Summary   LSC   3f modulation cancellation

The 33MHz sidebands can be eliminated by a careful choice of the modulation depths and of the relative phase between the modulation signals. If this condition is realized, the REFL33 signals will have even more immunity to the arm cavity signals, because the carrier will lose its counterpart for producing a signal at 33MHz.

Formulation of the double phase modulation:

m1: modulation depth of the f1 modulation
m2: modulation depth of the f2 (= 5 x f1) modulation

The electric field of the beam after the EOM is

$E = E_0 \exp \left\{ {\rm i} \left[ \Omega t + m_1 \cos \omega t + m_2 \cos 5 \omega t \right] \right\}$

$= E_0 e^{{\rm i} \Omega t} \left[ J_0(m_1) + J_1(m_1) e^{{\rm i} \omega t} - J_1(m_1) e^{-{\rm i} \omega t} + J_2(m_1) e^{{\rm i} 2\omega t} + J_2(m_1) e^{-{\rm i} 2\omega t} + J_3(m_1) e^{{\rm i} 3\omega t} - J_3(m_1) e^{-{\rm i} 3\omega t} + \cdots \right] \times \left[ J_0(m_2) + J_1(m_2) e^{{\rm i} 5 \omega t} - J_1(m_2) e^{-{\rm i} 5 \omega t} + \cdots \right]$

$= E_0 e^{{\rm i} \Omega t} \left\{ \cdots + \left[ J_3(m_1) J_0(m_2) + J_2(m_1) J_1(m_2) \right] e^{{\rm i} 3 \omega t} - \left[ J_3(m_1) J_0(m_2) + J_2(m_1) J_1(m_2) \right] e^{-{\rm i} 3 \omega t} + \cdots \right\}$

Therefore the "extinction" condition we want to realize is

$J_3(m_1) J_0(m_2) + J_2(m_1) J_1(m_2) = 0$

We are in the small-modulation regime, i.e. J0(m) ≈ 1, J1(m) ≈ m/2, J2(m) ≈ m²/8, J3(m) ≈ m³/48. Therefore we can simplify the above extinction condition to

$m_1 + 3 m_2 = 0$

m2 < 0 means that the starting phase of the m2 modulation needs to be 180deg off from the phase of the m1 modulation.
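The extinction condition can be checked numerically with the exact Bessel functions rather than the small-m expansion. A self-contained sketch (helper names are mine; a power-series J_n is used so no external library is needed):

```python
from math import factorial

def jn(n, x, terms=15):
    """Bessel function of the first kind J_n(x), n >= 0, via its power series."""
    return sum((-1)**k / (factorial(k) * factorial(k + n)) * (x / 2)**(2 * k + n)
               for k in range(terms))

def third_order(m1, m2):
    """3rd-order sideband amplitude J3(m1)J0(m2) + J2(m1)J1(m2).
    Negative m2 encodes the 180 deg phase offset, using J_n(-x) = (-1)^n J_n(x)."""
    def j(n, m):
        return jn(n, abs(m)) * ((-1)**n if m < 0 else 1)
    return j(3, m1) * j(0, m2) + j(2, m1) * j(1, m2)

# m1 + 3*m2 = 0 with m2 < 0 (180 deg offset) suppresses the 3rd order strongly:
print(abs(third_order(0.3, -0.1)))   # ~3e-7, cf. 3.5e-7 in the table below
print(abs(third_order(0.2,  0.2)))   # ~6.6e-4 with no cancellation
```

The residual ~3e-7 for m1=0.3, m2=-0.1 is set by the higher-order terms dropped in the m1 + 3 m2 = 0 approximation.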
$E = E_0 \exp \left\{ {\rm i} \left[ \Omega t + m_1 \cos \omega t + \frac{m_1}{3} \cos (5 \omega t + \pi) \right] \right\}$

Field amplitudes:

                       m1=0.3, m2=-0.1    m1=0.2, m2=0.2
Carrier                0.975              0.980
1st order sidebands    0.148              9.9e-2
2nd                    1.1e-3             4.9e-3
3rd                    3.5e-7             6.6e-4
4th                    7.4e-3             9.9e-3
5th                    4.9e-2             9.9e-2
6th                    7.4e-3             9.9e-3
7th                    5.6e-4             4.9e-4
8th                    1.4e-5             4.1e-5
9th                    1.9e-4             5.0e-4
10th                   1.2e-3             4.9e-3
11th                   1.9e-4             5.0e-4
12th                   1.4e-5             2.5e-5
13th                   4.7e-7             1.7e-6
14th                   3.1e-6             1.7e-5
15th                   2.0e-5             1.6e-4

11029   Sat Feb 14 19:54:04 2015   Koji   Summary   LSC   3f modulation cancellation

Optical Setup [Attachment 1]

Right before the PSL beam goes into the vacuum chamber, it goes through an AR-wedged plate. This AR plate produces two beams: one is for the IO beam angle/position monitor, and the other was usually dumped. I decided to use this beam. A G&H mirror reflects the beam towards the edge of the table, and a 45deg HR mirror brings it to the beat setup at the south side of the table. This beam is S-polarized as it comes directly from the EOM. [Attachment 2]

The beam from the PSL goes through a HWP and some matching lenses before the combining beam splitter (50% 45deg P). The AUX laser beam is attenuated by a HWP and a PBS. The transmitted beam from the PBS is supposed to have P-polarization. The beam alignment is usually done at the PSL beam side. The combined beam is steered by a HR mirror and introduced to a Thorlabs PDA10CF. As the PD has a small diameter of 0.5mm, the beam needed to be focused by a strong lens. After careful adjustment of the beam mode matching, polarization, and alignment, the beat note was ~1Vpp for 2.5Vdc. In the end, I reduced the AUX laser power such that the beat amplitude went down to ~0.18Vpp (-11dBm at the PD, -18dBm at the mixer, -27dBm at the spectrum analyzer), in order to minimize nonlinearity of the RF system and so that the spectrum analyzer didn't need input attenuation.
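The Vpp-to-dBm conversion quoted for the beat note (~0.18Vpp at the PD = -11dBm) is consistent assuming a 50Ohm load; a quick sketch (the helper name is mine):

```python
from math import log10, sqrt

def vpp_to_dbm(vpp, r=50.0):
    """Power in dBm of a sine of given peak-to-peak voltage into r ohms."""
    vrms = vpp / (2 * sqrt(2))
    p_watts = vrms**2 / r
    return 10 * log10(p_watts / 1e-3)

print(vpp_to_dbm(0.18))   # ~ -10.9 dBm, i.e. the "-11 dBm at the PD" above
```

The -18dBm at the mixer then reflects the ~7dB of loss/attenuation (DC block, 6dB pad, coupler) between the PD and the mixer described in the electrical setup.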
Electrical Setup [Attachment 3]

The PD signal is mixed with a local oscillator signal at 95MHz and then used to lock the PLL loop. The PLL allows us to observe the peaks with more integration time, and thus with a better signal-to-noise ratio.

The signal from the PD output goes through a DC block and then a 6dB attenuator. This attenuator is added to damp reflection and distortion between the PD and the mixer. When the PLL is locked, the dominant signal is the one at 95MHz. Without this attenuator, this strong 95MHz signal causes harmonic distortions (e.g. 190MHz), and as a result a series of spurious peaks at 190MHz +/- n*11MHz. A 10dB coupler is used to tap the PD signal without disturbing the main line much. Considering that we have a 6dB attenuator, next time we can use this coupler output for the PLL and use the main line for the RF monitor.

The mixer takes the PD signal and the LO signal from the Marconi. The Marconi is set to +7dBm output at 95MHz. For the image rejection, an SLP-1.9 was used. Since the Mini-Circuits filters have high impedance in the stop band, we need a 50Ohm terminator between the mixer and the LPF. The error signal from the LPF is fed to an SR560 (G=+500, 1Hz 1st-order LPF). I still don't understand why I had to use a LPF for the locking: as the NPRO PZT is a frequency actuator and the PLL is sensitive to the phase, we should be able to lock with a flat response. But it didn't work. Once we check the open loop TF of the system, it will become obvious (but I didn't). The actuation signal is fed to the fast PZT input of the AUX NPRO laser.
Attachment 1: beat_setup1.JPG Attachment 2: beat_setup2.JPG Attachment 3: electrical_setup.pdf

11031   Sat Feb 14 20:37:51 2015   Koji   Summary   LSC   3f modulation cancellation

Experimental results

- PD response [Attachment 1]

The AUX laser temperature was swept following the note by Annalisa [http://nodus.ligo.caltech.edu:8080/40m/8369]. It is easier to observe the beat note with the PSL shutter closed, as the MC locking yields more fluctuation of the PSL laser frequency at low frequency. Once I got the beat note and maximized it, I immediately noticed that the PD response is not flat. For the next trial, we should use a New Focus 1611. For today's measurement, I decided to characterize the response by sweeping the beat frequency and using the MAXHOLD function of the spectrum analyzer. The measured and modelled responses of the PD are shown in Attachment 1. The shape is non-intuitive, so the response was first modelled by a complex pole pair at 127.5MHz with a Q of 1, and the residual was then empirically fitted with a 29th-order polynomial in f.

- Modulation profile of the nominal setting [Attachment 2]

Now the spectrum of the PD output was measured. This is stitched data of the spectra between 1~101MHz and 99~199MHz, measured almost simultaneously (i.e. Display 1 and Display 2). The IF bandwidth was 1kHz. The PD response correction described above was applied. The spectrum obviously has the peaks associated with our main modulations, plus additional peaks. Attachment 2 breaks down what is causing them.

• Carrier: The PLL LO frequency is 95MHz, so the carrier is locked at 95MHz.
• Modulation sidebands (11/55MHz series): A series of sidebands is seen on both sides of the carrier, at 95MHz +/- n*fmod (fmod = 11.066128MHz). Note that the sidebands for n>10 were above 200MHz, and those for n<-9 (indicated in gray) were folded at 0Hz. With this measurement BW, the following sidebands were buried in the noise floor:
n = -8, -12, -13, -14, n <= -16, and n >= +7.

• Modulation sidebands for the IMC and PMC (29.5MHz and 35.5MHz): First-order sidebands for the IMC and PMC modulations are seen on both sides of the carrier, at 95MHz +/- 29.5MHz and 95MHz +/- 35.5MHz. The PMC modulation sidebands are supposed to be blocked by the PMC; however, due to the finite finesse of the PMC, a small fraction of them is transmitted. Indeed, it is comparable to the modulation depth of the IMC one.

• RF AM or RF EMI for the main and IMC modulations: If there is residual RF AM in the PSL beam associated with the IMC and main modulations, it appears as peaks at the modulation frequencies and their harmonics. EM radiation coupling into this measurement RF system also appears at these frequencies. They are seen at n*fmod (n=1,2,4,5) and at 29.5MHz.

• Reflection/distortion or leakage from mixer IF to RF: The IF port of the mixer naturally carries a 190MHz signal when the PLL is locked. If the isolation from the IF port to the RF port is not enough, this signal can appear in the RF monitor signal via an imperfection of the coupler or a reflection from the PD. Also, if reflection/distortion exists between the PD and the mixer RF input, it likewise causes a signal around 190MHz. This is seen at 190MHz +/- n*fmod; in the plot, the peaks at n=0, -1 are visible. In fact these peaks were secondarily dominant in the spectrum when there was no 6dB attenuation in the PD line. With the attenuator, they are well damped and don't disturb the main measurement.

From the measured peak heights, we are able to estimate the modulation depths for the 11MHz, 55MHz, and IMC modulations, as well as the relative phase of the 11MHz and 55MHz modulations. (This is not yet done.)

- 3f modulation reduction [Attachment 3]

Now the reduction of the 3f modulation was tried. The measured modulation levels for the 11MHz and 55MHz were almost the same.
The calculation predicts that the 55MHz modulation needs to be 1/3 of the 11MHz one. Therefore attenuations of 9dB and 10dB on the modulation attenuation knob of the frequency generation box were tried. To give a variable delay time in the 55MHz line, an EG&G ORTEC delay line unit was used. This allows us to change the delay from 0ns to 63.5ns with a resolution of 0.5ns. The frequency of 55MHz yields a phase sensitivity of ~20deg/ns (360deg/18ns). Therefore we can adjust the phase with a precision of 10deg over a range of 1275deg.

The 3rd-order peak at 61.8MHz was observed with a measurement span of 1kHz and a very narrow BW like 30Hz (? not so sure). The delay time was swept while measuring the peak height each time. For both attenuations, the peak height clearly showed a repetitive dependence with a period of 18ns, and the 10dB case gave the better result. The difference between the best (1.24e-7 Vpk) and the worst (2.63e-6 Vpk) was more than a factor of 20. The 3rd-order peak in the broadband spectrum measurement above was 6.38e-6 Vpk; considering the 10dB attenuation of the 55MHz modulation, we had been sitting at exactly the unlucky phase difference. The improvement expected from the 3f reduction (in the 33MHz signal) will be about 50, assuming there is no other coupling mechanism from CARM to REFL33. I decided to declare the best setting to be "10dB attenuation & 28ns delay".

- Resulting modulation profile [Attachment 4]

As a confirmation, the modulation profile was measured as before the adjustment. It is clear that the 3rd-order modulation is now buried in the noise floor. The 10dB attenuation of the 55MHz modulation yields a corresponding reduction of the sidebands. This will impact the signal quality for the 55MHz-series error signals, particularly the 165MHz ones. We should consider installing the Teledyne Cougar amplifier next to the EOM so that we can increase the overall modulation depth.
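The delay-line numbers in this entry follow directly from the 55MHz period; a quick check (the 18ns repetition, ~20deg/ns sensitivity, and ~1275deg range are the figures quoted above):

```python
f2 = 11.066128e6 * 5        # 55 MHz modulation frequency [Hz]
period_ns = 1e9 / f2        # ~18.1 ns -> the 18 ns repetition seen in the sweep

deg_per_ns = 360.0 / period_ns
print(deg_per_ns)           # ~19.9 deg/ns, the "~20 deg/ns" phase sensitivity

# EG&G ORTEC box: 0 - 63.5 ns in 0.5 ns steps
print(0.5 * deg_per_ns)     # ~10 deg resolution
print(63.5 * deg_per_ns)    # ~1265 deg range (quoted as 1275 deg using 20 deg/ns)
```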
Attachment 1: beat_pd_response.pdf Attachment 2: beat_nominal.pdf Attachment 3: 3f_reduction.pdf Attachment 4: beat_3f_reduced.pdf

11032   Sat Feb 14 22:14:02 2015   Koji   Summary   LSC   [HOW TO] 3f modulation cancellation

When I finished my measurements, the modulation setup was reverted to the conventional one. If someone wants to use the 3f cancellation setting, it can be done by following this HOW-TO.

The 3f cancellation is realized by adding a carefully adjusted delay line and attenuation for the 55MHz modulation on the frequency generation box at the 1X2 rack. Here is the procedure:

1) Turn off the frequency generation box. There is a toggle switch at the rear of the unit; it's better to turn it off before any cable action. The outputs of the frequency generation box are high in general, and we don't want to operate the amplifiers without proper impedance matching on any occasion.

2) Remove the small SMA cable between the 55MHz out and 55MHz in (left arrow in Attachment 1). According to the photo by Alberto (svn: /docs/upgrade08/RFsystem/frequencyGenerationBox/photos/DSC_2410.JPG), this 55MHz out is the output of the frequency multiplier, and the 55MHz in is the input of the amplifier stages. Therefore, the cable length between these two connectors sets the relative phase between the modulations at 11MHz and 55MHz.

3) Add a delay line box with cables (Attachment 2). Connect the cables from the delay line box to the 55MHz in/out connectors. I used 1.5m BNC cables. The delay line box was set to a 28ns delay.

4) Set the attenuation of the 55MHz EOM drive (right arrow in Attachment 1) to 10dB. Rotate the attenuator for the 55MHz EOM from the nominal 0dB to 10dB.

5) Turn on the frequency generation box.

For reference, Attachment 3 shows the characteristics of the delay line cable/box combo when the 3f modulation reduction was realized. It had 1.37dB attenuation and a +124deg phase shift. This phase change corresponds to a time delay of 48ns.
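The +124deg reading and the 48ns delay are consistent once the modulo-360deg ambiguity of the network analyzer phase reading is taken into account: at 55MHz, a 48ns delay is more than two full cycles. A sketch of that bookkeeping (variable names are mine):

```python
f2 = 55.33064e6                  # 55 MHz line frequency [Hz]
phase_reading_deg = 124.0        # network analyzer reading (mod 360)

# A delay t produces a phase lag of 360*f*t deg; the analyzer folds this into
# (-180, 180], so a lag L shows up as +124 deg when L = 236 + 360*k deg.
candidates_ns = [((360 - phase_reading_deg) + 360 * k) / (360 * f2) * 1e9
                 for k in range(4)]
print(candidates_ns)             # ~[11.8, 29.9, 48.0, 66.1] ns

# The k=2 branch (~48 ns) is the one matching the physical cable + box length.
```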
Note that the response of the short cable used for the measurement has been calibrated out using the CAL function of the network analyzer.

Attachment 1: freq_gen_box.JPG Attachment 2: delay_line.JPG Attachment 3: cable_spec.pdf

11033   Sun Feb 15 16:20:44 2015   Koji   Summary   LSC   [ELOG LIST] 3f modulation cancellation

Summary of the elogs:

3f modulation cancellation theory: http://nodus.ligo.caltech.edu:8080/40m/11005
3f modulation cancellation adjustment setup: http://nodus.ligo.caltech.edu:8080/40m/11029
Recipe for the 3f modulation cancellation: http://nodus.ligo.caltech.edu:8080/40m/11032
Modulation depth analysis: http://nodus.ligo.caltech.edu:8080/40m/11036

11034   Sun Feb 15 20:55:48 2015   rana   Summary   LSC   [ELOG LIST] 3f modulation cancellation

I wonder if the DRMI can be locked on 3f using this lower 55 MHz modulation depth. It seems that the PRMI should be unaffected, but the 3*f2 signals for SRCL will be too puny. Is it really possible to scale up the overall modulation depths by 3x to compensate for this?

11035   Mon Feb 16 00:08:44 2015   Koji   Summary   LSC   [ELOG LIST] 3f modulation cancellation

This KTP crystal has a maximum allowed RF power of 10W (=32Vpk) and V_pi = 230V. This corresponds to a maximum allowed modulation depth of 32*pi/230 = 0.44. So we can probably achieve a gamma_1 of ~0.4 and a gamma_2 of ~0.13. That's not x3 but x2. Then again, Kiwamu's triple resonant circuit (LIGO-G1000297-v1) actually shows modulation up to ~0.7, so it is purely a question of how to deliver sufficient modulation power. (In fact his measurement shows some nonlinearity above a modulation depth of ~0.4, so we should keep the maximum power consumption at the crystal to 10W.)

This means that we need to review our RF system (again!):

- Review the infamous crazy attenuator/amplifier combinations in the frequency generation box.
- Use the Teledyne Cougar amplifier (A2CP2596) right before the triple resonant box.
This should be installed close to the triple resonant box in order to minimize the effects of reflections due to imperfect impedance matching.
- Review and refine the triple resonant circuit - it's not built on a PCB but on a universal board. I think that we don't need triple resonance; double is OK, as the 29.5MHz signal is small.

We want a +28V supply at 1X1 for the Teledyne amp and the AOM driver. Do we have any unused Sorensen?

11036   Mon Feb 16 01:45:12 2015   Koji   Summary   LSC   modulation depth analysis

Based on the measured modulation profiles, the depth of each modulation was estimated. Least-squares minimization of the relative error was used as the cost function. The -8th, -12th~-14th, and n>=7th orders were not included in the estimation for the nominal case; the -7th~-9th, -11th~-15th, and n>=7th orders were not included for the 3f-reduced case.

Nominal modulation: m_f1 = 0.194, m_f2 = 0.234, theta_f1f2 = 41.35deg, m_IMC = 0.00153

3f-reduced modulation: m_f1 = 0.191, m_f2 = 0.0579, theta_f1f2 = 180deg, m_IMC = 0.00149

(Sorry! There are no error bars. The data have too few statistics...)

Attachment 1: modulation_nominal.pdf Attachment 2: modulation_3f_reduced.pdf

11106   Fri Mar 6 00:59:13 2015   rana   Summary   IOO   MC alignment not drifting; PSL beam is drifting

In the attached plot you can see that the MC REFL fluctuations started getting larger on Feb 24 just after midnight. It's been bad ever since. What happened that night, or on the afternoon of Feb 23? The WFS DC spot positions were far off (~0.9), so I unlocked the IMC and aligned the spots using the nearby steering mirrors - let's see if this helps.

Also, these mounts should be improved. Steve, can you please prepare 5 mounts with the Thorlabs BA2 or BA3 base, the 3/4" diameter steel posts, and the Polaris steel mirror mounts? We should replace the mirror mounts for the 1" diameter mirrors during the daytime next week to reduce drift.
Attachment 1: MCdrfit.png

11109 Fri Mar 6 13:48:17 2015 dark kiwamu Summary IOO triple resonance circuit

I was asked by Koji to point out where a schematic of the triple resonant circuit is. It seems that I had posted a schematic of what is currently installed (see elog 4562 from almost 4 yrs ago!).

(Some transformer story) Then I immediately noticed that it did not show two components which were wideband RF transformers. In order to get an effective turns ratio of 1:9.8 (as indicated in the schematic) from the Coilcraft transformer kit on the electronics table, I had put two more transformers in series with the PWB1040L which is shown in the schematic. If I am not mistaken, this PWB1040L must be followed by a PWB1015L and a PWB-16-AL, in that order from the input side to the EOM side. This gives an impedance ratio of 96, or an effective turns ratio of sqrt(96) = 9.8.

Also, if one wants to review and/or upgrade the circuit, this document may be helpful:
https://wiki-40m.ligo.caltech.edu/Electronics/Multi_Resonant_EOM?action=AttachFile&do=get&target=design_EOM.pdf
This is a document that I wrote some time ago describing how I wanted to make the circuit better. Apparently I did not get a chance to do it.

11112 Fri Mar 6 19:54:15 2015 rana Summary IOO MC alignment not drifting; PSL beam is drifting

MC Refl alignment follow-up: the alignment from last night still seems good today. We should keep an eye on the MC WFS DC spots and not let them get beyond 0.5.
Attachment 1: Untitled.png

11134 Wed Mar 11 19:15:03 2015 Koji Summary LSC ROUGH calibration of the DARM spectrum during the full PRFPMI lock

I made a very rough calibration of the DARM spectra before and after the transition for the second lock on Mar 8. The cavity pole (expected to be 4.3kHz) was not compensated. Also the servo bump was not compensated.

[Error calibration]
While DARM/CARM were controlled with ALS, their calibration is provided by the ALS phase tracker calibration.
i.e. 1 degree = 19.23kHz. This means that the calibration factor is

DARM [deg] * 19.23e3 [Hz/deg] / c [m/s] * lambda [m] * L_arm [m]
= DARM * 19.23e3/299792458*1064e-9*38.5
= 2.6e-9 * DARM [m]

[Feedback calibration]
Then, the feedback signal was calibrated by the suspension response (f=1Hz, Q=5) so that the error and feedback signals match at 100Hz. This gave me a DC factor of 5e-8.

The spectra at 1109832200 (ALS only, not even on resonance) and 1109832500 (after the DARM/CARM transitions) were taken. Jenne said that the whitening filters for AS55Q were not on.
Attachment 1: 150308_DARM_ROUGH_CALIB.pdf

11147 Thu Mar 19 16:58:19 2015 Steve Summary IOO MC alignment not drifting; PSL beam is drifting

Polaris mounts ordered.

Quote: In the attached plot you can see that the MC REFL fluctuations started getting larger on Feb 24 just after midnight. It's been bad ever since. What happened that night or the afternoon of Feb 23? The WFS DC spot positions were far off (~0.9), so I unlocked the IMC and aligned the spots using the nearby steering mirrors - let's see if this helps. Also, these mounts should be improved. Steve, can you please prepare 5 mounts with the Thorlabs BA2 or BA3 base, the 3/4" diameter steel posts, and the Polaris steel mirror mounts? We should replace the mirror mounts for the 1" diameter mirrors during the daytime next week to reduce drift.

Attachment 1: driftingInputBeam2.jpg

11148 Thu Mar 19 17:11:32 2015 steve Summary SUS oplev laser summary updated

March 19, 2015: 2 new JDSU 1103P (s/n P919645 & P919639) received from Thailand through Edmund Optics. Mfg date 12/2014 ... as spares.

11149 Fri Mar 20 10:51:09 2015 Steve Summary IOO MC alignment not drifting; PSL beam is drifting

Are the two visible small screws the only things holding the adapter plate? If yes, this is the weakest point of the IOO path.
Attachment 1: eom4.jpg Attachment 2: eom3.jpg

11158 Mon Mar 23 09:42:29 2015 Steve Summary IOO 4" PSL beam path posts

To achieve the same beam height, each component needs its specific post height.

Beam Height | Base Plate   | Mount          | 0.75" OD SS Post Height | Notes
4"          | Thorlabs BA2 | Newport LH-1   | 2.620"                  |
4"          | Thorlabs BA2 | Polaris K1     | 2.620"                  |
4"          | Thorlabs BA2 | Polaris K2     | 2.220"                  |
4"          | Thorlabs BA2 | Thorlabs LMR1  | 2.750"                  |
4"          | Thorlabs BA2 | New Focus 9401 | 2.120"                  |
4"          | Thorlabs BA2 | Newport U100   | 2.620"                  |
4"          | Thorlabs BA2 | Newport U200   | 2.120"                  |
4"          | Newport 9021 | LH-1           | 2.0"                    | PMC-MM lens with xy translation stage: Newport 9022, 9065A (Atm3)
4"          | Newport 9021 | LH-1           | 1.89"                   | MC-MM lens with translation stage: Newport 9022, 9025 (Atm2)

We have 2.625" tall, 3/4" OD SS posts for the Polaris K1 mirror mounts: 20 pieces. Ordered Newport LH-1 lens mounts with axis height 1.0".

Attachment 1: .75odSSpost.pdf Attachment 2: MC_mml_trans_clamp.jpg Attachment 3: PMCmmLn.jpg

11171 Wed Mar 25 18:27:34 2015 Koji Summary General Some maintenance

- I found that the cable for the AS55 LO signal had its shielding 90% broken. It was fixed.
- The Mon5 monitor in the control room had not been functional for months. I found a small CRT down the east arm. It is now set up as MON5, showing the picture from the cameras. Steve, do we need any safety measure for this CRT?

11173 Wed Mar 25 18:48:11 2015 Koji Summary LSC 55MHz demodulators inspection

[Koji Den EricG]

We inspected the {REFL, AS, POP}55 demodulators. In short, we made the following changes:
- The REFL55 PD RF signal is connected to the POP55 demodulator now. Thus, the POP55 signals should be used at the input matrix of the LSC screens for PRMI tests.
- The POP55 PD RF signal is connected to the REFL55 demodulator now.
- We jiggled the whitening gains and the whitening triggers. The whitening gains for the AS, REFL, and POP PDs are set to 9, 21, and 30dB as before. However, the signal gain may be changed.
The optimal gains should be checked through locking with the interferometer.

- Test 1
Injected a 55.3MHz signal into the demodulators and checked the amplitude of the demodulated signal with DTT. The peak height in the spectrum was calibrated to counts (i.e. it is not counts/rtHz). We checked the amplitude at the input of the input filters (e.g. C1:LSC-REFL55_I_IN1). The whitening gains were set to 0dB and the whitening filters were turned off.

REFL55
f_inj = 55.32961MHz -10dBm
REFL55I @999Hz  22.14 [cnt]
REFL55Q @999Hz  26.21 [cnt]

f_inj = 55.33051MHz -10dBm
REFL55I @ 99Hz  20.26 [cnt]  ~200mVpk at the analog I monitor
REFL55Q @ 99Hz  24.03 [cnt]

f_inj = 55.33060MHz -10dBm
REFL55I @8.5Hz  22.14 [cnt]
REFL55Q @8.5Hz  26.21 [cnt]
----
f_inj = 55.33051MHz -10dBm
AS55I   @ 99Hz 585.4 [cnt]
AS55Q   @ 99Hz 590.5 [cnt]   ~600mVpk at the analog Q monitor

f_inj = 55.33051MHz -10dBm
POP55I  @ 99Hz 613.9 [cnt]   ~600mVpk at the analog I monitor
POP55Q  @ 99Hz 602.2 [cnt]

We wondered why REFL55 has such a small response. The other demodulators seem to have some daughter board (Sigg amp?). This may be causing the difference.
-----
- Test 2
We injected a 1kHz 1Vpk AF signal into the whitening board and measured the peak height at 1kHz. The whitening filters/gains were set to the same conditions as above.

f_inj = 1kHz 1Vpk
REFL55I 2403 cnt
REFL55Q 2374 cnt
AS55I   2374 cnt
AS55Q   2396 cnt
POP55I  2365 cnt
POP55Q  2350 cnt

So they look identical. => The difference between REFL55 and the others is in the demodulator.

11180 Fri Mar 27 20:32:17 2015 Koji Summary LSC Locking activity

- Adjusted the IMC WFS operating point. The IMC refl is 0.42-0.43.
- The arms are aligned with ASS.
- The X arm green was aligned with ASX. The PZT offset sliders were adjusted to offload the servo outputs.
- I tried locking once and the transition was successful. I even tried the 3f-1f transition but the lock was lost. I wasn't sure what the real cause was. I need to go now.
I leave the IFO in a state where it is waiting for the arms to be locked with IR for the full locking trial.

11235 Wed Apr 22 11:48:30 2015 manasa Summary General Delay line frequency discriminator for FOL error signal

Since the frequency counters have not been a reliable error signal for the FOL PID loop, we will put together an analog delay-line frequency discriminator (DFD) as an alternative method to obtain the beat frequency. The configuration will be similar to what was done in elog 4254 in the first place.

For a delay-line frequency discriminator, the output at the mixer is proportional to cos(theta_b), where theta_b = 2*pi*f_b*L/v; L is the cable length asymmetry, f_b the beat frequency, and v the velocity of light in the cable. A linear output signal can be obtained for 0 < theta_b < pi. For our purpose in FOL, if we would like to measure the beat frequency over a bandwidth of 200MHz, this would correspond to a cable length difference of 0.5 m (assuming the speed of light in the coaxial cable is ~2x10^8 m/s).

11236 Wed Apr 22 14:56:18 2015 manasa Summary General Delay line frequency discriminator for FOL error signal

[Koji, Manasa]

Since the bandwidth of the fiber PD is ~1GHz, we could design the frequency discriminator to have a wider bandwidth (~500MHz). The output from the frequency discriminator could then be used to define the range setting of the frequency counter for readout, or maybe even as the error signal for the PID loop. A test run of the analog DFD with a cable length difference of 27cm gave a linear output signal with a zero crossing at ~206MHz. A detailed schematic of the setup and a plot (voltage vs frequency) will be updated shortly.

11245 Fri Apr 24 21:31:20 2015 Koji Summary CDS automatic mxstream resetting

We have been annoyed by the frequent stalling of mxstream. We'll update the RCG when the time comes (in the not-too-distant future), but for now we need automatic mxstream resetting. I found that there is such a script already.
/opt/rtcds/caltech/c1/scripts/cds/autoMX

So this script was registered in the crontab on megatron. It is invoked every 5 minutes:

# Auto MXstream reset when it fails
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /opt/rtcds/caltech/c1/scripts/cds/autoMX >> /opt/rtcds/caltech/c1/scripts/cds/autoMX.log

11252 Sun Apr 26 00:56:21 2015 rana Summary Computer Scripts / Programs problems with new restart procedures for elogd and apache

Since the nodus upgrade, Eric/Diego changed the old csh restart procedures to be more UNIX-standard. The instructions are in the wiki. After doing some software updates on nodus today, apache and elogd didn't come back OK. Maybe because of some race condition, elog tried to start but didn't get apache. Apache couldn't start because something was already bound to the ELOGD port.
So I killed ELOGD several times (because it kept trying to respawn). Once it stopped trying to come back I could restart Apache using the wiki instructions. But the instructions didn't work for ELOGD, so I had to restart that using the usual .csh script that we used to use.

11267 Fri May 1 20:33:31 2015 rana Summary Computer Scripts / Programs problems with new restart procedures for elogd and apache

Same thing again today. So I renamed /etc/init/elog.conf so that it doesn't keep respawning bootlessly. Until then, restart elog using the start script in /cvs/cds/caltech/elog/ as usual. I'll let EQ debug when he gets back - probably we need to pause the elog respawn so that it waits until nodus has been up for a few minutes before starting.

Quote: Since the nodus upgrade, Eric/Diego changed the old csh restart procedures to be more UNIX-standard. The instructions are in the wiki. After doing some software updates on nodus today, apache and elogd didn't come back OK. Maybe because of some race condition, elog tried to start but didn't get apache. Apache couldn't start because something was already bound to the ELOGD port.

11268 Sun May 3 01:04:19 2015 rana Summary PEM Seismo signals are bad

https://ldas-jobs.ligo.caltech.edu/~max.isi/summary/day/20150502/pem/seismic/

It looks like some of our seismometers are oscillating, not mounted well, or something like that. There is no reason for them to be so different. Which Guralp is where? And where are our accelerometers mounted?

11270 Mon May 4 10:21:09 2015 manasa Summary General Delay line frequency discriminator for FOL error signal

Attached is the schematic of the analog DFD and the plot showing the zero crossing for a delay line length difference of 27cm. The bandwidth of the linear output signal roughly matches what is expected from the length difference (370MHz). We could use a shorter cable to further increase our bandwidth. I propose we use this analog DFD to determine the range at which the frequency counter needs to be set, and then use the frequency counter readout as the error signal for FOL.

Attachment 1: DFD.png Attachment 2: DFD_resp.png

11272 Mon May 4 12:42:34 2015 manasa Summary General Delay line frequency discriminator for FOL error signal

Koji suggested that I make a cosine fit for the curve instead of a linear fit. I fit the data to V(f) = A + B*cos(2*pi*f_b*L/v), where L is the cable length asymmetry (27 cm), f_b the beat frequency, and v the velocity of light in the cable (~2x10^8 m/s). The plot with the cosine fit is attached.
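As a sanity check on this cosine model, the zero crossing it implies can be computed numerically. This is only an illustrative sketch, not the actual analysis code; it uses the fit coefficients reported below and the nominal cable parameters from these entries (L = 27 cm, v ~ 2x10^8 m/s), and the bisection search is my own addition:

```python
import math

# Cosine model from this entry: V(f) = A + B*cos(2*pi*f*L/v)
A, B = 0.4177, 2.941   # fit coefficients quoted in the fit results
L = 0.27               # cable length asymmetry [m]
v = 2e8                # signal velocity in the coax [m/s]

def v_out(f_hz):
    """DFD output voltage predicted by the cosine model."""
    return A + B * math.cos(2 * math.pi * f_hz * L / v)

# Find the zero crossing by bisection on the falling slope,
# where v_out(100 MHz) > 0 > v_out(300 MHz)
lo, hi = 100e6, 300e6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if v_out(mid) > 0:
        lo = mid
    else:
        hi = mid
f_zero = 0.5 * (lo + hi)
print(f"zero crossing ~ {f_zero/1e6:.0f} MHz")
```

With these coefficients the crossing comes out near 200 MHz, in line with the ~206 MHz zero crossing measured earlier with the 27 cm cable (elog 11236).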
Fit coefficients (with 95% confidence bounds):
A = 0.4177  (0.3763, 0.4591)
B = 2.941   (2.89, 2.992)

Attachment 1: DFD_cosfit.png

11300 Mon May 18 14:46:20 2015 manasa Summary General Delay line frequency discriminator for FOL error signal

Measuring the voltage noise and frequency response of the analog delay-line frequency discriminator (DFD)

The schematic and an actual photo of the setup are shown below. The setup was checked to be physically sturdy, with no loose connections or moving parts.

The voltage noise at the output of the DFD was measured using an SR785 signal analyzer while simultaneously monitoring the signal on an oscilloscope. The noise at the output of the DFD was measured for no RF input and at several RF input frequencies, including the zero crossing frequency and the optimum operating frequency of the DFD (20MHz). The plot below shows the voltage noise for different RF inputs to the DFD. It can be seen that the noise level is slightly lower at the zero crossing frequency, where the amplitude noise is eliminated by the DFD.

I also made measurements to obtain the frequency response of the setup, since the cable length difference had changed from the prior setup. The cable length difference is 21cm, and the obtained linear signal at the output of the DFD extends over ~380MHz, which is good enough for our purposes in FOL. A cosine fit to the data was done as before.

//edit - Manasa: The gain of the SR560 was set to 20 to obtain the data shown below//

Fit coefficients (with 95% confidence bounds):
a = -0.8763  (-1.076, -0.6763)
b = 3.771    (3.441, 4.102)

Data and matlab scripts are zipped and attached.
Attachment 4: DFD.zip

11368 Mon Jun 22 12:57:09 2015 ericq Summary LSC X/Y green beat mode overlap measurement redone

I took measurements at the green beat setup on the PSL table, and found that our power / mode overlap situation is still consistent with what Koji and Manasa measured last September [ELOG 10492].
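The arithmetic behind this entry's beat-power and overlap numbers can be sketched in a few lines. This is an illustrative check, not ericq's actual analysis; it uses the DC photocurrents and measured RF powers tabulated at the end of this entry, and note that the quoted e values correspond to taking the measured-to-expected power difference in dB over 20 (an amplitude ratio) rather than over 10:

```python
import math

def expected_prf_dbm(i1_amp, i2_amp, r_ohm=2000.0):
    """Expected beat power for full overlap (e=1):
    V_RF = 2*R*sqrt(I1*I2), P_RF = V_RF^2/2/50, expressed in dBm."""
    v_rf = 2 * r_ohm * math.sqrt(i1_amp * i2_amp)
    return 10 * math.log10(v_rf**2 / 2 / 50 * 1000)

# DC photocurrents from the table in this entry (converted from uA)
p_x = expected_prf_dbm(3.6e-6, 82.5e-6)   # XARM: ~ -13.2 dBm
p_y = expected_prf_dbm(10.7e-6, 4.1e-6)   # YARM: ~ -21.5 dBm

# Overlap from the measured beat powers (amplitude-ratio convention
# that reproduces the quoted e values)
e_x = 10 ** ((-20.0 - p_x) / 20)   # ~ 0.46
e_y = 10 ** ((-27.5 - p_y) / 20)   # ~ 0.50
```

This reproduces the -13.2 / -21.5 dBm expected powers and the ~46% / ~50% overlaps quoted below.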
I also measured the powers at the BBPDs with the Ophir power meter. Both mode overlaps are around 50%, which is fine.

The beatnote amplitudes at the BBPD outputs at a frequency of about 50MHz are -20.0 and -27.5 dBm for the X and Y beats, respectively. This is consistent with the measured optical power levels and a PD response of ~0.25 A/W at 532nm. The main reason for the disparity is that there is much more X green light than Y green light on the table (a factor of ~20), and the greater amount of green PSL light on the Y BBPD (a factor of ~3) does not quite make up for it.

One way to punch up the Y beat a little might be to adjust the pickoff optics. Of the 25uW of Y arm transmitted green light incident on the polarizing beamsplitter that separates the X and Y beams, only 13uW makes it to the Y BBPD, but this would only win us a couple of dB at most. In any case, with the beat setup as it exists, it looks like we should design the next beatbox iteration to accept RF inputs of around -20 to -30 dBm.

In the style of the referenced ELOG, here are today's numbers.

            XARM   YARM
o BBPD DC output (mV)
 V_DARK:   +  1.0  + 2.2
 V_PSL:    +  7.1  +21.3
 V_ARM:    +165.0  + 8.2

o BBPD DC photocurrent (uA)
I_DC = V_DC / R_DC ... R_DC: DC transimpedance (2kOhm)
 I_PSL:       3.6   10.7
 I_ARM:      82.5    4.1

o Expected beat note amplitude
I_beat_full = I1 + I2 + 2 sqrt(e I1 I2) cos(w t) ... e: mode overlap (in power)
I_beat_RF = 2 sqrt(e I1 I2)
V_RF = 2 R sqrt(e I1 I2) ...
R: RF transimpedance (2kOhm)

P_RF = V_RF^2/2/50 [Watt]
     = 10 log10(V_RF^2/2/50*1000) [dBm]
     = 10 log10(e I1 I2) + 82.0412 [dBm]
     = 10 log10(e) + 10 log10(I1 I2) + 82.0412 [dBm]

For e=1, the expected RF power at the PDs [dBm]:
 P_RF:      -13.2  -21.5

o Measured beat note power (no alignment done)
 P_RF:      -20.0  -27.5  [dBm] (53.0MHz and 46.5MHz)
 e:          45.7   50.1  [%]

11370 Mon Jun 22 14:53:37 2015 rana Summary LSC X/Y green beat mode overlap measurement redone

• Why is there a factor of 20 power difference? Some of it is the IR laser power difference, but I thought that was just a factor of 4 in green.
• Why is the mode overlap only 50% and not more like 75%?
• IF we have enough PSL green power, we could do the Y beat with an 80/20 instead of a 50/50 and get better SNR.
• The FFD-100 response is more like 0.33 A/W at 532 nm, not 0.25 A/W.

In any case, this signal difference is not big, so we should not need a different amplifier chain for the two signals. The 20 dB of amplification in the BeatBox was a fine approach, but not great in circuit layout. The BBPD has an input-referred current noise of 10 pA/rtHz and a transimpedance of 2 kOhm, so an output voltage noise of 20 nV/rtHz (into 50 Ohms). This would be matched by an amp with NF = 26 dB, which is way worse than anything we could buy from Mini-Circuits, so we should definitely NOT use anything like the low-noise, low-output-power amps used currently (e.g. ZFL-1000LN... never, ever use these for anything). We should use a single ZHL-3A-S (G = 25 dB, NF < 6 dB, Max Out = 30 dBm) for each channel (and nothing else) before driving the cables over to the LSC rack into the aLIGO demod board. I just ordered two of these now.

11384 Tue Jun 30 11:33:00 2015 Jamie Summary CDS prepping for CDS upgrade

This is going to be a big one. We're at version 2.5 and we're going to go to 2.9.3.
RCG components that need to be updated:
• mbuf kernel module
• mx_stream driver
• iniChk.pl script
• daqd
• nds

Supporting software:
• EPICS 3.14.12.2_long
• ldas-tools (framecpp) 1.19.32-p1
• libframe 8.17.2
• gds 2.16.3.2
• fftw 3.3.2

Things to watch out for:
• RTS 2.6:
  • raw minute trend frame location has changed (CRC-based subdirectory)
  • new kernel patch
• RTS 2.7:
  • supports "commissioning frames", which we will probably not utilize. Need to make sure that we're not writing extra frames somewhere.
• RTS 2.8:
  • "slow" (EPICS) data from the front-end processes is acquired via the DAQ network, and not through EPICS. This will increase traffic on the DAQ LAN. Hopefully this will not be an issue and the existing network infrastructure can handle it, but it should be monitored.

11390 Wed Jul 1 19:16:21 2015 Jamie Summary CDS CDS upgrade in progress

## The CDS upgrade is now underway

Here's what's happened so far:

• Installed and linked in all the RTS supporting software packages in /opt/rtapps (only on front end machines and fb):

controls@c1lsc ~ 2$ find /opt/rtapps/ -mindepth 1 -maxdepth 1 -type l -ls
12582916 0 lrwxrwxrwx 1 controls 1001 12 Jul 1 13:16 /opt/rtapps/gds -> gds-2.16.3.2
12603452 0 lrwxrwxrwx 1 controls 1001 10 Jul 1 13:17 /opt/rtapps/fftw -> fftw-3.3.2
12603451 0 lrwxrwxrwx 1 controls 1001 15 Jul 1 13:16 /opt/rtapps/libframe -> libframe-8.17.2
12603450 0 lrwxrwxrwx 1 controls 1001 13 Jul 1 13:16 /opt/rtapps/libmetaio -> libmetaio-8.2
12582915 0 lrwxrwxrwx 1 controls 1001 34 Jul 1 15:24 /opt/rtapps/framecpp -> ldas-tools-1.19.32-p1/linux-x86_64
12582914 0 lrwxrwxrwx 1 controls 1001 20 Jul 1 13:15 /opt/rtapps/epics -> epics-3.14.12.2_long

• Checked out the RTS source for the version we'll be using: 2.9.4
/opt/rtcds/rtscore/tags/advLigoRTS-2.9.4

• Built and installed all of the RTS components:
  • mbuf
  • mx_stream
  • daqd
  • nds
  • awgtpman

• mx_stream is not working. Unknown why.
It won't start on the front end machines (only tested on c1lsc so far); it fails with the following error:

controls@c1lsc ~ 1$ /opt/rtcds/caltech/c1/target/fb/mx_stream -s c1x04 c1lsc c1ass c1oaf c1cal -d fb:0
send len = 263596
mx_connect failed Remote Endpoint is Closed
controls@c1lsc ~ 1$

Have contacted Keith T. and Rolf B. for backup. This is a blocker, since this is what ferries the data from the front ends.

• Rebuilt almost all models. This was good. Initially nothing would compile because of IPC creation errors, so I moved the old chans/ipc/C1.ipc file out of the way and generated a new one, and then everything compiled (of course senders have to be compiled before receivers). I only had to fix a couple of things in the models themselves:
  • c1ioo - unterminated FiltCtrl inputs
  • C1_SUS_SINGLE_CONTROL - unterminated FiltCtrl inputs
  • c1oaf - bad part named "STATIC". There is some hacky namespace stuff going on in the RCG. I was able to just explode that part and it now works.
  • c1lsc - unterminated FiltCtrl inputs

Haven't installed or tried to run anything yet, but the fact that they compile is good. Some models are not compiling because they have C code in src blocks that is throwing errors:
  • c1lsc
  • c1cal

It shouldn't be too hard to fix whatever is causing those compile errors. That's it for today. Will pick up again first thing tomorrow.

11392 Tue Jul 7 17:22:16 2015 Jessica Summary Time Delay in ALS Cables

I measured the transfer functions of the delay line cables, and then calculated the time delay from them. The first cable had a time delay of 1272 ns and the second had a time delay of 1264 ns. Below are the plots I created to calculate this. There does seem to be a pattern in the residual plots, however, which was not expected. The R-square parameter was very close to 1 for both fits, indicating that the fit was good.
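The delay extraction described here can be sketched simply: for a pure delay, the unwrapped transfer-function phase is a straight line in frequency, and its slope gives the delay. This is a minimal illustration on noiseless synthetic data with an arbitrary 1 us delay, not the actual measurement; a real cable would show residual structure from dispersion and impedance mismatch, which may explain the pattern in the residual plots:

```python
import math

# Synthetic transfer-function phase for a pure delay: phi(f) = -2*pi*f*tau
tau_true = 1.0e-6                              # illustrative delay [s]
freqs = [f * 1e5 for f in range(1, 101)]       # 0.1 - 10 MHz
phases = [-2 * math.pi * f * tau_true for f in freqs]  # unwrapped phase [rad]

# Least-squares slope through the origin: tau = -sum(phi*w) / sum(w^2), w = 2*pi*f
num = sum(p * 2 * math.pi * f for p, f in zip(phases, freqs))
den = sum((2 * math.pi * f) ** 2 for f in freqs)
tau_est = -num / den
print(f"estimated delay = {tau_est*1e9:.2f} ns")  # recovers the 1000 ns input
```

On real data the phase must be unwrapped before fitting, and systematic curvature in the residuals is the signature of a non-ideal (dispersive) cable rather than a bad fit.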
Attachment 1: cableA_fit.jpg Attachment 2: cableA_resid.jpg Attachment 3: cableB_fit.jpg Attachment 4: cableB_resid.jpg

11393 Tue Jul 7 18:27:54 2015 Jamie Summary CDS CDS upgrade: progress!

After a couple of days of struggle, I made some progress on the CDS upgrade today:

## Front end status:
• mbuf kernel module built and installed
• All front ends have been rebooted with the latest patched kernel (from the 2.6 upgrade)
• All models have been rebuilt, installed, and restarted. Only minor model issues had to be corrected (mostly unterminated unused inputs).
• awgtpman rebuilt, and installed/running on all front ends
• /opt/open-mx -> open-mx-1.5.2
• All front ends running the latest version of mx_stream, built against 2.9.4 and open-mx-1.5.2.

We have new GDS overview screens for the front end models. It's possible that our current lack of IRIG-B GPS distribution means that the 'TIM' status bit will always be red on the IOP models. Will consult with Rolf. There are other new features in the front ends that I can get into later.

## DAQ (fb) status:
• daqd and nds rebuilt against 2.9.4, both now running on fb

40m daqd compile flags:

cd src/daqd
./configure --enable-debug --disable-broadcast --without-myrinet --with-mx --enable-local-timing --with-epics=/opt/rtapps/epics/base --with-framecpp=/opt/rtapps/framecpp
make
make clean install daqd /opt/rtcds/caltech/c1/target/fb/

However, daqd has unfortunately been very unstable, and I've been trying to figure out why. I originally thought it was some sort of timing issue, but now I'm not so sure.

I had to make the following changes to the daqdrc:

set gps_leaps = 820108813 914803214 1119744016;

That enumerates some list of leap seconds since some time. Not sure if that actually does anything, but I added the latest leap second anyway.

set symm_gps_offset=315964803;

This updates the silly, arbitrary GPS offset that is required to be correct when not using an external GPS reference.
Finally, the thing that got it running stably was to turn off all trend frame writing:

# start trender;
# start trend-frame-saver;
# sync trend-frame-saver;
# start minute-trend-frame-saver;
# sync minute-trend-frame-saver;
# start raw_minute_trend_saver;

For whatever reason, it's the trend frame writing that was causing daqd to fall over after a short amount of time. I'll continue investigating tomorrow. We still have a lot of cleanup, burt restores, testing, etc. to do, but we're getting there.

11395 Wed Jul 8 17:46:20 2015 Jessica Summary General Updated Time Delay Plots

I re-measured the transfer function for cable B, because the residuals in my previous post for cable B indicated a bad fit. I also realized I had made a mistake in calculating the time delay, and calculated more reasonable time delays today. Cable A had a delay of 202.43 +- 0.01 ns. Cable B had a delay of 202.44 +- 0.01 ns.

Attachment 1: resid_CableA.png Attachment 2: resid_CableB.png

ELOG V3.1.3-
http://www.etiquettehell.com/smf/index.php?topic=51263.msg2963664
### Author Topic: Special Snowflake Stories  (Read 5786008 times) 2 Members and 3 Guests are viewing this topic. #### Virg • Super Hero! • Posts: 5886 ##### Re: Special Snowflake Stories « Reply #21690 on: June 17, 2013, 12:48:32 PM » "Well, I'd turned my text tone off when I went to bed the other night..only to be jolted awake by my phone ringing at 12:15 a.m. It was co-worker...because I "hadn't answered her texts" that she'd sent me at 11:30 p.m. These were not work related texts, BTW, and nothing important." I'd block her number entirely.  If anyone at work complained, I would simply tell them that she keeps sending non-work related texts and calls at late hours. artk2002 wrote: "It would be interesting to see the reaction if someone said "Sure, I'll take that $100. I'd like to see a photo ID, please."" The fridge logic in this is that someone who's knowingly passing counterfeit currency is probably skilled enough to have a fake ID that would pass muster. I'd rather just say no and not have to deal with any issues. Virg #### jedikaiti • Swiss Army Nerd • Hero Member • Posts: 2914 • A pie in the hand is worth two in the mail. ##### Re: Special Snowflake Stories « Reply #21691 on: June 17, 2013, 12:50:09 PM » LadyClaire wrote: "Well, I'd turned my text tone off when I went to bed the other night..only to be jolted awake by my phone ringing at 12:15 a.m. It was co-worker...because I "hadn't answered her texts" that she'd sent me at 11:30 p.m. These were not work related texts, BTW, and nothing important." I'd block her number entirely. If anyone at work complained, I would simply tell them that she keeps sending non-work related texts and calls at late hours. Absolutely. Or assign a silent ringtone to that number. What part of v_e = \sqrt{\frac{2GM}{r}} don't you understand? It's only rocket science! 
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture #### LadyClaire • Super Hero! • Posts: 9915 ##### Re: Special Snowflake Stories « Reply #21692 on: June 17, 2013, 01:06:04 PM » LadyClaire wrote: "Well, I'd turned my text tone off when I went to bed the other night..only to be jolted awake by my phone ringing at 12:15 a.m. It was co-worker...because I "hadn't answered her texts" that she'd sent me at 11:30 p.m. These were not work related texts, BTW, and nothing important." I'd block her number entirely. If anyone at work complained, I would simply tell them that she keeps sending non-work related texts and calls at late hours. Absolutely. Or assign a silent ringtone to that number. I did that the next morning, as soon as my head was clear enough to figure it out. #### Elfmama • Super Hero! • Posts: 6309 ##### Re: Special Snowflake Stories « Reply #21693 on: June 17, 2013, 01:09:34 PM » Well, in a store you would expect to be able to get change for a large bill. But asking at a yard sale or a craft fair or a flea market if they have change for$100, first thing in the morning?  Either SS or someone trying to pass off a counterfeit.  You might lose a sale if it's just SSness, but you'll lose a lot more, all day long, when you don't have change for $5. « Last Edit: June 17, 2013, 01:14:25 PM by Elfmama » ~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~ It's true. Money can't buy happiness. You have to turn it into books first. ~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~ #### RegionMom • Super Hero! • Posts: 6244 • ♪♫ ♫ ♪ ♫ ♪♫ ♪ ♪♪♫ ♪♫ ♪♫ ##### Re: Special Snowflake Stories « Reply #21694 on: June 17, 2013, 01:54:35 PM » But even stores might not have enough if it is just opened, or late at night- "we carry no large bills past 10pm" some places post. Banks and credit unions are truly the best places to get the proper cash. 
my fave yard sales are the "moving, everything must go!" where they practically pay YOU to take their stuff away! Although, there was one time, just last month, when I happened upon a huge yardsale, on a weekday, and I just had to stop and look around, even though I had little cash. I found a lovely item, a work of art, way above an amount I had on me. I mentioned where and how it could be used, for educational purposes at a specific location, and the owner said, "that school? For that age and teacher? Ya know waht, I bet we can just do a donation on that one!" So I gave them my card and even more details on where it would be going. I did a write up on the back and presented it to the teacher a few days later. The item now has a place of honor in her classroom, and the kids were very happy to receive it. So, yes, I got it for free, but all the people involved were extra happy about it! Fear is temporary...Regret is forever. #### otterwoman • Hero Member • Posts: 1061 ##### Re: Special Snowflake Stories « Reply #21695 on: June 17, 2013, 02:16:41 PM » Re:$100 bills and garage sales I, personally, would never accept a $100 bill from anyone at a garage sale, even if I had lots of change. Or even if I were selling something on Craigslist. In fact, when I place ads on Craigslist for things, I always write, "Cash only. No$100's." $100's are the most counterfeited bills out there. I'd be extremely suspicious of someone trying to pay for a small item with one of them. I agree! I sold a car on Craigslist once and when the buyer showed up with 100s, I'm sure they thought I was the SS since I insisted on testing all the bills with a special marker. There is a way to fool the pens; hairspray. The pen ink won't hit the paper, so it doesn't change. This lists some things to look for: http://www.secretservice.gov/money_detect.shtml Also, the ink never dries. You should be able to rub a bill on white paper and see the green ink on the paper. 
This works well for new bills that haven't gone through the washer in someone's pocket. #### wonderfullyanonymous • Hero Member • Posts: 2613 ##### Re: Special Snowflake Stories « Reply #21696 on: June 17, 2013, 03:03:37 PM » A short PSA... When testing any money for authenticity, even with the marker, always check the watermark. Make sure the face on the bill matches the face on the watermark. If a bill was originally a $1 or a $10 and has been bleached to a $100, or a $5 bleached to a $50, the marker will show it as authentic, when in fact it is counterfeit. Also, because counterfeiters are using cotton when they are making their fake money, those markers will also not work on them. Always, always, always check for that watermark. #### mmswm • Hero Member • Posts: 2417 ##### Re: Special Snowflake Stories « Reply #21697 on: June 17, 2013, 03:06:55 PM » Re: the late-calling coworker: I have my phone set so that between certain hours, or while I have certain apps running, only a few select numbers will get through and all the rest go straight to voicemail without ever alerting me. Some people lift weights. I lift measures. It's a far more esoteric workout. - (Quoted from a personal friend) #### pierrotlunaire0 • Hero Member • Posts: 4356 • I'm the cat's aunt! ##### Re: Special Snowflake Stories « Reply #21698 on: June 17, 2013, 03:33:01 PM » Last Wednesday, I had a SS at the DMV. BG: It is summer, and it is extremely busy here. The busiest time of the year, with wait times averaging at least an hour. At one point in the afternoon, a 3-year-old starts screaming. Blood-curdling screams. Coworkers said that I visibly started every time she let one loose. I didn't say anything, although I did notice that the mother was ignoring her (the child would periodically run over to her to scream about her siblings). I was just trying to work as fast as I could to get everyone out. Finally, my lead worker asked the mother to please take her child out into the foyer.
Lead worker explained that we would notify her when it was her turn, but that we had people attempting to take driving tests and her child was too disruptive. The mother's response was that it was our fault: we should set up a play area for children (where? we have no room) with crayons and coloring books and toys (and who pays for this?) so her children could play. Another customer yelled: "This isn't day care, lady!" That made her angrier. When it was her turn, she pitched a tantrum that put her child to shame. The next day, I had customers return (they couldn't take the noise any more) to complete their business. One test taker came back with ear plugs. I have enough lithium in my medicine cabinet to power three cars across a sizeable desert. Which makes me officially...Three Cars Crazy #### kherbert05 • Super Hero! • Posts: 10537 ##### Re: Special Snowflake Stories « Reply #21699 on: June 17, 2013, 04:11:30 PM » Aside from the awfulness of what she said, why do all stories about SS waiting to see obstetricians always involve them declaring "I'm pregnant" in a room full of pregnant women? I suppose being a SS makes them totally blind to all the other bumps in the room because no-one else features in their world. Exactly why I turned down a transfer to OB/GYN. I was also offered one to Pediatrics; not going there, either. (I found out that after I resigned from my job and moved out of state, the job I'd been doing was eliminated. That's why I was getting all these offers for lateral transfers. But they couldn't pay me enough to work for either OB/GYN or Pediatrics. People who work in both of those places and love it are very special, but I don't work well with the public and even less well with Special Snowflakes.) I think that is a wise move. Some of the things I've seen in my OB's office are very SS-like. Thankfully he has awesome reception staff who are very good at making it clear that his clients in labour come first while still being polite and calm.
And you know the women who demand their appointment be on time, and that the OB should leave the birth he is attending for it, are also the ones who will expect him to be at their side every minute of their labour! My Mom always maintained that the only doctors she would wait for were OBs, Cardiologists, and Trauma Doctors. The others needed to schedule in such a way that they could stay on time. Our Pediatrician was old-fashioned. We rarely if ever had to wait for him. He also kept open times in his schedule so that he could handle emergencies. There was one exam room off to the side, away from the main ones. I was put in there immediately if I was having a follow-up after an allergic reaction - because medication to suppress a full-on reaction actually suppresses your immune system. That room kept kids like me away from the sick kids. I remember hearing the nurses explain that no, I wasn't cutting, I just couldn't be in the waiting room. (Mom usually gave a fuller explanation if the other parents were nice to the nurse - she saw it as an opportunity to educate people) Don't Teach Them For Your Past. Teach Them For Their Future #### weeblewobble • Hero Member • Posts: 3398 ##### Re: Special Snowflake Stories « Reply #21700 on: June 17, 2013, 04:12:42 PM » Cousin Jason showed up for the wedding and had a blast at the reception. He was the life of the party, drinking and dancing. As the evening wore down, he acknowledged that he was in no shape to drive. So he started to ask around, trying to find other guests at the wedding who had booked a room at the hotel for the evening, hoping he could crash with them. Most of the couples he spoke to turned him down and gently suggested that he go book a room himself - as the hotel still had several available. But Cousin Jason didn't want to pay for his own room at this point - he just wanted someone else to put him up for free.
As the couples all turned in for the night, he became increasingly vocal about how he needed some place to crash. And this is how Cousin Jason ended up sleeping on the pull-out couch in the bridal suite - crashing in the room the bride and groom had booked for their honeymoon night. Ugh. What a lovely memory for the happy couple. Waking up on their first morning as man and wife, and having a relation sprawled out asleep just across the room. That's horrifying, but the happy(?) couple should have maintained their spines. #### mumma to KMC • Member • Posts: 640 ##### Re: Special Snowflake Stories « Reply #21701 on: June 17, 2013, 04:23:52 PM » snip The mother's response was that it was our fault: we should set up a play area for children (where? we have no room) with crayons and coloring books and toys (and who pays for this?) so her children could play. As a mother of five (yes, they are all mine) I make sure we take crayons (or colored pencils), coloring books or blank paper, and a book or two for reading when we venture out to places where there might be a wait. I don't expect anyone else to "entertain" my children. That being said, if we go to a place that has a designated kid area, they are not allowed to touch anything b/c I'm kind of a germaphobe! #### Virg • Super Hero! • Posts: 5886 ##### Re: Special Snowflake Stories « Reply #21702 on: June 17, 2013, 04:27:17 PM » wheeitsme wrote: "I got a $100 bill for my birthday (I'm over 40 and my Dad is not a shopper, LOL). And when I went to the big box building store this weekend and bought $12 worth of stuff, I asked if they would be able to take it. I asked. And I told them that if they couldn't, I would pay with my debit. The cashier said that she thought she could, opened her drawer and confirmed that it wouldn't be a problem. And that's the polite way to do it" I have to disagree.
Unless the business is obviously small or there's some extant circumstance like a sign that says they don't accept larger than a certain denomination, I don't see a problem with paying by whatever means I have, and I don't believe that asking about it is the only polite way to do it. To be honest, I'd be pretty surprised if a big box store had trouble handling a c-note, and I wouldn't think to ask. I agree that a big bill can be a burden for a corner coffee shop or a newsstand (and certainly would be bad news for a garage sale) but a grocery store or a non-kiosk at the mall should be prepared to handle hundreds. Virg #### lemons • Jr. Member • Posts: 36 ##### Re: Special Snowflake Stories « Reply #21703 on: June 17, 2013, 04:38:46 PM » I could have a laundry list of the SS that I encountered this past weekend at my church's yard sale, but I'll keep the list short. First off, let me explain how my church does its yard sale. We have what we call a "Priceless" yard sale, where parishoners bring their items, set them up on tables on the church lawn (right by the road that goes in front of the church, so as to increase traffic), but we don't price anything. We ask people what they are willing to pay and then (hopefully) come to an agreement on the price. There is a central table to pay, so shoppers can buy from multiple people and only pay once. Generally, people run their own table and agree to prices only for their merchandise. All proceeds go to the church, no person gets a monetary compensation for the items they bring to the sale. We try to get good prices for the items, as the church could use the money. 1st SS: My tables (I had 5!) were completely full with baby/toddler toys and books. I had 2 full tables of Little People sets - all were new looking and complete - I had gone through all of them, matched up what figures came with each set, put them in a ziploc bag and taped the bag to the building (barn, house, etc.). 
I had one man pick up the barn, which had 7 animals plus a farmer, and offer $1. I said no, I needed at least $4 (this is an item that retails for $25; mine had been gently used by 1 child and looked brand new). He made a derogatory comment about my prices being too high and put it down. He then picked up a plastic semi-hauler (for toy cars - this was Big Mac from Cars, if you recognize it) and offered me a quarter. I said no, I'd like at least $2 (it retailed for $13 or so, and the kid barely ever touched it). He put it down and picked up a wooden train (engine and 2 cars) that was still in its packaging - never opened. He offered me a quarter for that as well. Again, I asked for $2, and he put it down (this was a Thomas brand train, which goes for at least $20 in the store). I saw him later with the semi and train, walking to his car. I went to the cashier's table and asked what he paid, because we hadn't come to an agreement on the price. The woman at the table said $1, and he told her that I said it was ok! What a low-life, basically stealing money from a church! SS2: A woman was looking at the items on all the tables, and eventually came up to the cashier's table with a toy car (kind of like the batmobile, with several figures all together), 5 baby board books and a rattle, and asked how much it would be. We asked her to make us an offer, what she felt it would be worth. Her response: "I don't have any money on me". Who comes to a yard sale with no money? We all kind of looked at her and she asked if we would give it to her. When we said no, she said she'd go look in her car to see what she had. She came back with change and asked us if we'd take $1.33 for all of it. Everyone looked at me (since it was my stuff) and I gently said that I was sorry, but I couldn't let the car go for that.
She was welcome to the books (I was selling those for $.25 each, usually) and the rattle, but I knew that someone would give us a couple of bucks for the car and I felt it was my duty to get as good of a price as I could for the church. Luckily, she didn't argue and took the books and the rattle and left. This is why I'm not a fan of not putting prices on items at the yard sale, but I understand why we don't. If people had to price their items, we would get a lot fewer items donated to the sale because it takes so much extra work. Doing it this way, people gather their items, work the sale (if they want), which runs from 8am - 1pm, and their work is done. Pricing stuff can take hours and some people just don't want to deal with the hassle. #### kherbert05 • Super Hero! • Posts: 10537 ##### Re: Special Snowflake Stories « Reply #21704 on: June 17, 2013, 04:46:52 PM » On the subject of $100 bills - Dad's boss used to give his adult kids and their spouses $10,000 each year for Christmas - cash money. He sent his secretary to the bank to get the money. One year the bank informed her they no longer made bills in either the $1,000 or $10,000 denomination (was there ever a $10,000 bill?). The poor guy, honest as the day is long, got a visit from both the IRS and the DEA. They believed him, thank goodness. Now he gives them money orders. Dad would get a large sum of money out when we traveled. One time when I was 16, we were getting ready to leave the next morning but we were out of milk. Dad sent me to Safeway to get a quart of milk with a $100 bill at 10 pm. I fell all over myself apologizing to the clerk, who assured me it was ok. Thing was, they had just installed scanners and still had the volume turned up, so it announced I was getting $98 or something near that back in change. There were some sketchy guys that had been leaving. They heard that, stepped outside, and stopped. I went to the manager and explained what had happened. He walked me to my car. It takes us 2 days to get to PEI.
Dad calls into the office to check in and they ask if I'm OK. Apparently the Safeway manager brought the security issue to the attention of his higher-ups, using my name/Mr. Herbert's daughter in the story. The higher-ups (district, not corporate) had called Dad's office to apologize and say they were looking into getting the volume turned off. (Dad was president of one of 3 Miller distributorships in Houston, so he knew all the grocery higher-ups.) Kind of related: I got travelers checks the last time I went to PEI. American Express. I went to my Nanna's bank to cash one and they refused. Apparently there had been a rash of counterfeit ones, and no-one was accepting them. I was standing there, jaw on the floor, trying to figure out what to do. The teller asked me to move and asked the person behind what she needed. Michelle was maybe 10; she said, "Nothing, I'm just with my cousin," pointing to me. "My cousin on Daddy's side; her mom is my Aunt (Mom's name)." I honestly think Michelle knew exactly what she was doing and what type of politics she was playing. Since I was my grandparents' grandchild, they cashed the checks and gave me a card to show any other teller at their bank to cash any other checks for the rest of my stay. Don't Teach Them For Your Past. Teach Them For Their Future
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp015m60qs05q Title: Novel Hydroxyapatite Coatings for the Conservation of Marble and Limestone Authors: Naidu, Sonia Advisors: Scherer, George W Contributors: Chemical and Biological Engineering Department Keywords: acid rain; calcite; consolidation; hydroxyapatite; limestone; marble Subjects: Chemical engineering; Materials Science; Civil engineering Issue Date: 2014 Publisher: Princeton, NJ : Princeton University Abstract: Marble and limestone are calcite-based materials used in the construction of various structures, many of which have significant artistic and architectural value. Unfortunately, due to calcite's high dissolution rate, these stones are susceptible to chemically-induced weathering in nature. Limestone, due to its inherent porosity, also faces other environmental weathering processes that cause weakening from disintegration at grain boundaries. The treatments presently available are all deficient in one way or another. The aim of this work is to examine the feasibility of using hydroxyapatite (HAP) as a novel protective coating for marble and limestone, with two goals: i) to reduce acid corrosion of marble and ii) to consolidate physically weathered limestone. The motivation for using HAP is its low dissolution rate and structural compatibility with calcite. Mild, wet chemical synthesis routes, in which inorganic phosphate-based solutions were reacted with marble and limestone, alone and with other precursors, were used to produce HAP films. Film nucleation, growth and phase evolution were studied on marble to understand film formation and determine the optimal synthesis route. An acid resistance test was developed to investigate the attack mechanism on marble and quantify the efficacy of HAP-based coatings. Film nucleation and growth were dependent on substrate surface roughness and increased with calcium and carbonate salt additions during synthesis.
Acid attack on marble occurred via simultaneous dissolution at grain boundaries, twin boundaries and grain surfaces. HAP provided intermediate protection against acid attack, when compared to two conventional treatments. Its ability to protect the stone from acid was not as significant as predicted from dissolution kinetics and this was attributed to incomplete coverage and residual porosity within the film, arising from its flake-like crystal growth habit, which enabled acid to access the underlying substrate. The effectiveness of HAP as a consolidant for weathered limestone, alone and coupled with a commercially available consolidant (Conservare® OH-100), was also investigated. To artificially weather limestone in the lab, a reproducible thermal degradation technique was utilised. The dynamic elastic modulus, water sorptivity and coating composition of treated stones were evaluated. HAP was found to be an effective consolidant for limestone, as it restored the elastic modulus of damaged stones to their original values and exhibited superior performance to Conservare® OH-100. URI: http://arks.princeton.edu/ark:/88435/dsp015m60qs05q Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog. Type of Material: Academic dissertations (Ph.D.) Language: en Appears in Collections: Chemical and Biological Engineering
Files in This Item:
Naidu_princeton_0181D_11006.pdf (3.96 MB, Adobe PDF)
Naidu_princeton_0181D_408/phosphate.nb (1.25 MB, Unknown)
Naidu_princeton_0181D_408/carbonate.nb (670.93 kB, Unknown)
Naidu_princeton_0181D_408/marbledissolutionv3.nb (509.74 kB, Unknown)
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
# Question on GAM p-value in summary output

    mod_gam1 <- gam(Overall ~ s(Income, bs="cr"), data=d)
    summary(mod_gam1)
    ##
    ## Family: gaussian
    ##
    ## Formula:
    ## Overall ~ s(Income, bs = "cr")
    ## Approximate significance of smooth terms:
    ##             edf Ref.df    F p-value
    ## s(Income)   6.9   7.74 16.4   2e-14 ***

Does the significant p-value (< 0.05) mean that the smooth component used for Income was the correct choice, or that the independent variable Income has a significant effect on Overall? I'm new to GAMs. I have read several comments, papers and lectures on it, but I'm still confused.

It means that, given the smoothing function that was applied, there is a significant association between $$income$$ and $$Overall$$. There is no way to judge from the output whether the specific spline you chose via bs="cr" is "correct". That rather depends on which type of model works best for your goal and application. You can check the spline that was fit via plot(mod_gam1) and employ some critical reasoning as to whether it makes sense. If the only "problem" with $$income$$ is that it is right-skewed, which is quite common, a simple log-transformation might be sufficient, and it avoids some interpretability issues of GAMs.
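The answer's closing remark about log-transforming a right-skewed income variable can be illustrated numerically. This is a minimal sketch, not part of the original answer: it assumes NumPy is available and fabricates lognormal "income" data, then compares sample skewness before and after the log transform.

```python
import numpy as np

def skewness(x):
    # Sample skewness: mean of cubed standardised values.
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

rng = np.random.default_rng(0)
# Synthetic right-skewed "income" data (lognormal), purely illustrative.
income = rng.lognormal(mean=10.0, sigma=0.8, size=5000)

print(skewness(income))          # strongly positive (right-skewed)
print(skewness(np.log(income)))  # near zero (roughly symmetric)
```

The log transform maps the multiplicative spread of a lognormal variable into an additive, roughly symmetric one, which is exactly why it often removes the need for a flexible smooth in the first place.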
## calyne 2 years ago Differentiate the function y = 2x*log[10](sqrt(x)) 1. calyne y' = [(2x)/(ln(10)*sqrt(x)] + [2*log[10](sqrt(x))] 2. calyne correct so far? 3. calyne or no the derivative of log[10](sqrt(x)) would be (1/[sqrt(x)ln(10)])*(1/[2sqrt(x)]) 4. calyne no it's not 5. calyne well according to the textbook the answer is pretty different 6. calyne alright well i've got 1/ln(10) + 2*log[10](sqrt(x)) 7. calyne that's not final though so help me from there 8. calyne somehow the second term (after the +) needs to equal log[10](x) does that make sense 9. experimentX isn't it something like this http://www.wolframalpha.com/input/?i=d%2Fdx%28+2x*log+base+10+%28sqrt%28x%29%29%29 10. calyne sure i guess it's something like that bro jesus you're a great help 11. calyne but that's still not the textbook answer 12. experimentX let me test, dy/dx = 2{dx/dx*log[10](sqrt(x))+x*d(log[10](sqrt(x)))/dx} = 2{log[10](sqrt(x))+x*d(ln(sqrt(x)))/dx*1/ln10} = 2{log[10](sqrt(x))+x*1/sqrt(x)*1/2*1/sqrt(x)*1/ln10} = 2{log[10](sqrt(x))+1/2ln10} my answer. 13. calyne alright here look the textbook answer is 1/ln(10) + log[10](x). 14. experimentX i got almost same answer ... except 1/ln(10) + 2log[10](x) 15. calyne i got as far as 1/ln(x) + 2(log[10](x)), the latter of which = 2[log(sqrt(x))/log(10)], so [2*log(sqrt(x))] / log(10) = log(sqrt(x))^2/log(10) ?? which = log(x)/log(10)? which = log[10](x) ???? is that correct? does that make sense? i'm reeeal rusty on the log business 16. calyne nah bro according to the product rule that other end of the + sign is gonna be the derivative of 2x * the log[10](sqrt(x)). unless that's not the case. but i'm pretty sure it is. don't see any way around it really. and you got the derivative of 2x multiplying the other factor which is supposed to be left as is, so.... you tell me..... if and how i'm wrong..... 17. experimentX use this property of log to convert it into natural log. 
$\log_ax = \frac{\ln x}{\ln a}$ I think there's a better example at wikipedia. 18. calyne i know that, and it still doesn't explain how 2*log[10](sqrt(x)) = log[10](x). 19. calyne oh unless... that equals log[10]([sqrt(x)]^2) ???? is that it? 20. calyne oh so it is
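The identity the thread converges on, 2·log10(√x) = log10(x), and the textbook derivative 1/ln(10) + log10(x) can both be checked with the standard library alone. The test point x0 = 3.7 is arbitrary, chosen only for illustration:

```python
import math

def y(x):
    # The function from the thread: y = 2x * log10(sqrt(x))
    return 2 * x * math.log10(math.sqrt(x))

def dy(x):
    # Textbook answer: y' = 1/ln(10) + log10(x)
    return 1 / math.log(10) + math.log10(x)

x0, h = 3.7, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)  # central difference
print(abs(numeric - dy(x0)))                 # tiny: the formulas agree

# The identity that resolved the confusion: 2*log10(sqrt(x)) == log10(x),
# because log(a^b) = b*log(a) and (sqrt(x))^2 = x.
print(math.isclose(2 * math.log10(math.sqrt(3.7)), math.log10(3.7)))  # True
```

The derivation itself is shortest via the same identity: y = 2x·log10(√x) = x·log10(x), so by the product rule y' = log10(x) + x·(1/(x·ln 10)) = log10(x) + 1/ln(10).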
# Quantitative Finance newsletter ## Top new questions this week: ### Do intraday volume and volatility share the same properties? volatility clustering and mean reversion are very well known properties that one could use when trading. Traders, especially in options world, do take realized vol into account (e.g. by forecasting it … ### Linear-Boundary Crossing Problem for Brownian Motion This is a question I came across while reading: $W = (W_t)_{t\geq{0}}$ is a standard BM. Let $\mu\in \mathbb{R}$, and let $\tau_{a}^{\mu}$ = $\inf(t>0;W_t = a + \mu{t})$ be the first passage time … finance brownian-motion quantitative passage-time ### Is Behavioral Finance relevant to quants? This topic has been prompted by the following question: Measuring Behavioral Finance Effects in Fund/Portfolio Manager Analysis After reading it and the comments below I started thinking whether … option-pricing models behavioral-finance answered by SRKX 1 vote ### An alternative to the Gaussian distribution to describe/fit market stock returns After the financial crisis in 2008, many people (including me) don't really believe that stock returns can be described in terms of the normal distribution (Gaussian distribution). But besides the … equities normal-distribution ### What's the underlying idea of definition of constrained market in Skiadas' Asset Pricing Theory? I'm self-studying Skiadas' Asset Pricing Theory, and find the definition of constrained market on page 21 confusing(you can find it here in the sample chapter). Definition 1.26. A constrained … modeling theory market-model convexity intuition ### 3 Factor HJM model, do these factors have an economic meaning? In the HJM model, in case we have 3 factors, do these factors have an economic meaning at all ? interest-rates answered by Probilitator 1 vote ### Attributing change in yield as a result of structural change Suppose your portfolio has $w_0$ amount of bonds with yield $r_0$. 
Now you buy additional $w_1$ amount of bonds with yield $r_1$, then buy additional $w_2$ amount of bonds with yield $r_2$. … answered by Probilitator 1 vote ## Greatest hits from previous weeks: ### Free paper trading site with an API I've got a quanitative trading model I want to test out in the real stock market. Right now, I'm writing some code to pull "live" quotes from yahoo, feed them to my model, and keep track of the … ### Transformation from the Black-Scholes differential equation to the diffusion equation - and back I know the derivation of the Black-Scholes differential equation and I understand (most of) the solution of the diffusion equation. What I am missing is the transformation from the Black-Scholes … black-scholes differential-equations ## Can you answer these? ### The Public Market Equivalent measure in private equity What are the advantages and disadvantages of the Public Market Equivalent measure in private equity? Why is it that the volatility of the cash flows do not matter? This topic has been discussed in a … equities private pme asked by roland 1 vote ### regarding Basel III IRB method for credit risk Would the exposures between standard method and internal rating based method for credit risk under Basel III remain same? I could not find any documents for IRB approach under Basel III. Is it still … risk risk-management asked by Shubhangi Kulkarni 1 vote ### Fitting a sigmoid function to incomplete, structured, data I have an incomplete data set that looks like this: and I believe that it will continue to form a sigmoid-like shape. How can fit a sigmoid curve to this data given my assumption about what the … curve-fitting asked by user1650502 1 vote
# 11.3. Bandwidth extension (BWE)#

Different hardware and software use a range of sampling rates, so the sound reproduction side can often offer a higher sampling rate than the recorded signal has. The local hardware is thus better than the sound received: the bandwidth of the output sound corresponds to the recorded sound, which is lower in quality than what the hardware can reproduce. Such bandlimitation makes the output sound dull and darker than the original (see Waveform/Sampling rate for audio examples). The objective here is to mend such degradations in audio quality, to restore the audio to the best possible quality.

Bandwidth extension (BWE), sometimes referred to as audio super-resolution after its image-processing counterpart, is the task of mapping a signal $$s_l$$ with sample rate $$fs_l$$ to another signal $$s_h$$ with sample rate $$fs_h$$. Typically, $$fs_h$$ is $$r$$ times larger than $$fs_l$$, where $$r$$ is an integer known as the downsampling factor: $$\frac{fs_h}{fs_l} = r$$

In BWE, $$\hat{s}_h$$, the estimate of $$s_h$$, should ideally be perceived as if it was originally digitised at $$fs_h$$, thus being as close as possible to $$s_h$$ itself. If no additional information other than $$s_l$$ is known, this problem is also found in the literature as blind bandwidth extension (BBWE). It can be formulated as follows: $$\hat{s}_h=f\left(W, s_l\right)$$ where $$W$$ is an arbitrary set of parameters. It is important to observe that the aforementioned relationship assumes that $$\hat{s}_h$$ can be estimated from $$s_l$$. This may not be possible for all types of signals, and because of this, BWE can be considered an underdetermined (ill-posed) problem, since several $$\hat{s}_h$$ candidates can be fabricated for any given $$s_l$$. For a particular case (e.g.
speech signals) some candidates will be perceptually more appropriate than others; perceptual criteria are therefore highly important in choosing the right estimate. Furthermore, several questions remain open, such as: how many parameters should $$W$$ contain? How many samples in the neighbourhood of a given sample are necessary to produce an optimal estimate $$\hat{s}_h$$? Widely used methods in speech processing such as linear prediction exploit the fact that adjacent speech samples are correlated and, given a sufficient sample rate, are not expected to change abruptly. This is advantageous for speech BWE, since it helps form a criterion under which more appropriate candidates can be separated from less appropriate ones.

The following plots show first an audio recording sampled at 48kHz and then the same audio previously downsampled to 16kHz. Even though the waveforms of the two examples are very similar, the perceived sound is different because of the information missing from the 16kHz recording. The message can be understood in both cases, but the 16kHz version lacks high-frequency content, mainly affecting plosives and fricatives. Now, let's compare the magnitude spectrograms of both recordings. What was previously experienced by listening to the audio samples can be corroborated by comparing the magnitude spectrogram plots: the audio at 16kHz lacks content above half of its sample frequency, which is what BWE will attempt to reconstruct.

# 11.4. Applications#

Professional studio recordings for music, film or podcasts will typically use sample rates of at least 44.1kHz (CD quality). However, other applications such as VoIP telephony target lower sample rates as a way of minimising the amount of data to be transmitted.
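The loss of high-band content described above is easy to reproduce. The sketch below is an illustration (not from the chapter) that uses an ideal FFT brick-wall filter in place of a real resampler: it downsamples a two-tone test signal from 48 kHz to 16 kHz and shows that the 10 kHz partial disappears while the 2 kHz partial survives.

```python
import numpy as np

def brickwall_downsample(x, r):
    """Downsample x by the integer factor r after an ideal (FFT brick-wall)
    anti-aliasing filter -- a toy stand-in for a real resampler."""
    X = np.fft.rfft(x)
    X[len(X) // r:] = 0.0          # discard everything above the new Nyquist
    return np.fft.irfft(X, n=len(x))[::r]

def spectrum(x, fs):
    """One-sided magnitude spectrum and its frequency axis."""
    return np.fft.rfftfreq(len(x), 1 / fs), np.abs(np.fft.rfft(x))

fs_h, r = 48000, 3                 # fs_h / fs_l = r, so fs_l = 16 kHz
t = np.arange(fs_h) / fs_h         # one second of "audio"
s_h = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.sin(2 * np.pi * 10000 * t)
s_l = brickwall_downsample(s_h, r)

f_h, S_h = spectrum(s_h, fs_h)
f_l, S_l = spectrum(s_l, fs_h // r)
# The 48 kHz signal has partials at 2 kHz and 10 kHz; the 16 kHz copy
# retains only the 2 kHz one, since its Nyquist frequency is 8 kHz.
print("partials (Hz):", f_h[S_h > 1000], "->", f_l[S_l > 1000])
```

This is exactly the situation BWE starts from: `s_l` carries no information at all above its Nyquist frequency, and everything above it must be estimated.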
For instance, Bluetooth headphones using the Hands-Free Profile or Headset Profile use sample rates such as 8kHz (known as narrowband) or 16kHz (known as wideband). For this reason, one of the main motivations behind BWE is to overcome these limitations and deliver comparable quality without affecting the data rate. Succeeding in this objective would not only improve the perceived quality of speech, but could also benefit downstream tasks such as speaker recognition, speech-to-text (STT) or automatic translation.

During the last decades, deep-learning-based digital signal processing has demonstrated outstanding results in several tasks such as speech denoising, speech synthesis and voice conversion. Nevertheless, some solutions involve significantly more computation than previously proposed ones, making them impractical for real-time processing. One way to mitigate this problem is to train the network to process the input at a lower sample rate than the expected output of a given application, and then artificially extend the bandwidth of the resulting signal before sending it to the system output. Since the computational cost of most neural-network layers grows in proportion to the input size, processing a smaller input can greatly improve the overall efficiency of the system. This is especially important in mobile or edge devices, where computational resources are limited. Moreover, in some cases these devices are the only choice for privacy reasons, since on-device deployment may help avoid propagating sensitive information to cloud servers.

# 11.5. Existing approaches#

As previously mentioned, BWE is an ill-posed problem in and of itself. Therefore, the proposed solutions tend to be domain-specific (i.e. speech signals only).
Previous solutions proposed for speech coding attach side information, extracted from the encoded signal, that is used during decoding to extend the bandwidth of the resulting output. Other solutions do not rely on auxiliary information and attempt to estimate the spectral envelope instead. State-of-the-art solutions usually involve neural networks that estimate the spectral envelope, the waveform, or a combination of both. The main advantage of deep learning in BWE is that the network learns from the data itself, rather than processing the input based on a prescribed set of rules.

## 11.5.1. Time domain networks#

One of the most straightforward data-driven approaches is to directly use the low-bandwidth signal $$s_l$$ as input to estimate the waveform of the high-bandwidth signal $$s_h$$. Typical architectures for this task involve some form of convolutional neural network or recurrent neural network (gated recurrent units, GRUs, or long short-term memory networks, LSTMs) that can effectively learn both the short- and long-term dependencies in the input signal. The main drawback of this approach is that waveforms that look similar do not necessarily sound similar (as experienced with the audio examples shown previously). Networks of this type alone may produce a waveform that preserves the short-time features of the original input while its spectral content is very different.

Typically, the input shape of these networks is the same as the expected output shape. To achieve this, the low-bandwidth signal $$s_l$$ is first upsampled and then fed to the network. The upsampling technique may vary, but in all cases it simply provides an interpolated version of the original signal to the network.

## 11.5.2. Frequency domain networks#

To address the perceptual-relevance problem mentioned in the previous paragraph, one possible solution is to include a different representation of the input speech signal, such as the Short-time Fourier Transform (STFT). In this case, the network is trained to estimate the high-bandwidth magnitude spectrum $$\hat{S}_h$$ from the low-bandwidth magnitude spectrum $$S_l$$. This enforces similarity in frequency content between the ground truth $$s_h$$ and the estimate $$\hat{s}_h$$. It yields a perceptually similar output, but with some disadvantages:

• The effectiveness of the method depends on the STFT parameters, so there is a time-frequency trade-off in which larger FFT sizes may give better frequency resolution but poorer time resolution. This may impact the perception of the transients in the estimate $$\hat{s}_h$$ compared to the ground truth $$s_h$$.

• Usually, the phase of the estimate $$\hat{s}_h$$ is extended based on a prescribed set of rules, such as copying and/or flipping the phase of the original signal $$s_l$$. Such a reconstruction may lead to a waveform with artifacts. It is also important to mention that, given a modified STFT (MSTFT), there may not exist a time-domain signal whose STFT exactly matches the MSTFT; therefore, even if the magnitude spectrogram estimate is optimal, the overall result could still be perceptually inadequate.

Some approaches are based on this idea but elaborate further by using representations such as the mel spectrogram or MFCCs.

## 11.5.3. Mixed approaches#

Due to the drawbacks of both time-domain and frequency-domain methods, one alternative is to combine the advantages of both in order to mitigate the individual problems of each.
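As a concrete (non-learned) illustration of the frequency-domain idea, the toy baseline below fills the empty high band of an upsampled signal with an attenuated copy of the low band's magnitude and a rule-based (sign-flipped) phase. The attenuation factor and phase rule are arbitrary choices for illustration; a frequency-domain network would replace this hand-made magnitude rule with a learned estimate.

```python
import numpy as np

def naive_bwe(s_l, r):
    """Toy frequency-domain bandwidth extension: upsample s_l by r, then
    fill the empty high band with an attenuated copy of the low band's
    magnitude and a flipped-phase rule. Not a learned model -- just the
    kind of rule-based scheme a frequency-domain network improves upon."""
    X_l = np.fft.rfft(s_l)
    n_h = len(s_l) * r
    X_h = np.zeros(n_h // 2 + 1, dtype=complex)
    X_h[:len(X_l)] = r * X_l                    # low band passes through
    mag = np.abs(X_l[1:])                       # magnitudes to replicate
    ph = -np.angle(X_l[1:])                     # "flipped" phase rule
    pos = len(X_l)
    while pos < len(X_h):                       # tile the low band upwards
        n = min(len(mag), len(X_h) - pos)
        X_h[pos:pos + n] = 0.25 * mag[:n] * np.exp(1j * ph[:n])
        pos += n
    return np.fft.irfft(X_h, n=n_h)

fs_l, r = 16000, 3
t = np.arange(fs_l) / fs_l
s_l = np.sin(2 * np.pi * 2000 * t)              # band-limited input
s_hat = naive_bwe(s_l, r)                       # 48 kHz estimate
S = np.abs(np.fft.rfft(s_hat))
# the original 2 kHz partial is kept, and synthetic energy now
# appears in the previously empty band above 8 kHz
```

Real systems operate frame-by-frame on an STFT rather than on the whole signal at once; this single-FFT version only makes the band-filling idea visible.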
One way to do this is by cascading two networks: first, a frequency-domain network A produces a rough waveform estimate whose magnitude spectrum $$\hat{S}_h$$ matches the magnitude spectrum of the ground truth $$S_h$$; then a second network B refines transients and alleviates phasing issues. Since this method comprises two cascaded networks, the total size and computational requirements are expected to increase compared to any of the single-network methods above. This may be unimportant for offline use cases or cloud inference, but it may be a deciding factor for real-time applications and on-device mobile inference. Other approaches use a parallel connection instead of cascading two networks.

## 11.5.4. Generative adversarial networks (GANs)#

Generative adversarial networks have demonstrated significant success in several audio processing tasks, including BWE. The main idea is that instead of using a single network, a GAN is a combination of at least two networks, each playing a different role. The generator $$G$$ generates examples $$\hat{s}_h$$ that should be as close as possible to the desired output $$s_h$$. The discriminator $$D$$ acts as a classifier that should distinguish between fake samples $$\hat{s}_h$$ and real samples $$s_h$$. Both networks have to be trained alternately to obtain good results. If the discriminator $$D$$ outperforms the generator $$G$$ by too much, the generator no longer learns how to improve its estimates $$\hat{s}_h$$, causing the so-called vanishing-gradients problem. Conversely, if the generator $$G$$ outperforms the discriminator $$D$$, mode collapse may occur, meaning that the generator only produces a handful of examples that are known to fool the discriminator. This is not desirable, since the end goal is a generator $$G$$ that ideally generalises to any speech input signal $$s_l$$.
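The alternating schedule described here can be made concrete with a deliberately tiny example. In the sketch below, a 1-D "generator" and a logistic "discriminator" with hand-derived gradients stand in for real networks; the data distribution, learning rate and step count are all arbitrary illustrations, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy 1-D GAN: "real" data ~ N(3, 1); generator G(z) = a*z + b;
# discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr = 0.05

for step in range(2000):
    # --- D-step: generator frozen ---------------------------------
    x_real = rng.normal(3.0, 1.0, size=64)
    z = rng.normal(size=64)
    x_fake = a * z + b
    u_r, u_f = w * x_real + c, w * x_fake + c
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. (w, c)
    gw = np.mean(-(1 - sigmoid(u_r)) * x_real + sigmoid(u_f) * x_fake)
    gc = np.mean(-(1 - sigmoid(u_r)) + sigmoid(u_f))
    w, c = w - lr * gw, c - lr * gc

    # --- G-step: discriminator frozen -----------------------------
    z = rng.normal(size=64)
    x_fake = a * z + b
    u_f = w * x_fake + c
    # gradients of the non-saturating loss -log D(fake) w.r.t. (a, b)
    ga = np.mean(-(1 - sigmoid(u_f)) * w * z)
    gb = np.mean(-(1 - sigmoid(u_f)) * w)
    a, b = a - lr * ga, b - lr * gb

print(f"generator output mean -> {b:.2f} (data mean 3.0)")
```

Since `z` is zero-mean, the generator's output mean equals `b`, so a successful run drags `b` from 0 toward the data mean of 3 while the two players alternate updates, exactly the freeze-one-update-the-other loop described in the text.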
When the generator $$G$$ is trained, the discriminator $$D$$ weights are frozen. $$G$$ first generates a prediction, and $$D$$ classifies this prediction as real or fake. $$G$$ then updates its weights to gradually generate better predictions that $$D$$ cannot distinguish from real full-bandwidth audio. Then the generator $$G$$ weights are frozen, and $$G$$ is used to generate the fake examples, namely examples whose bandwidth has been artificially extended. A combination of fake examples and real examples (ground-truth audio) is fed to the discriminator $$D$$ along with their respective labels, so that the results obtained by $$D$$ on both types of samples can be compared against the expected output. If the GAN successfully converges, the next step is to disconnect the discriminator $$D$$ and use only the generator $$G$$ for inference. This removes the computational burden caused by the discriminator during inference. Additionally, some configurations involve multiple discriminators, for example each one using an STFT with a different FFT size and hop size. Overall, GANs are still an active area of research, and many implementation details are fine-tuned by trial and error on a case-by-case basis.

# 11.6. Evaluation#

Even though there are widely used metrics for automatic speech-quality assessment, such as Perceptual Evaluation of Speech Quality (PESQ) or Virtual Speech Quality Objective Listener (ViSQOL), they usually require narrowband or wideband inputs to compute the score. For BWE, using them would defeat the purpose if the goal is to produce a signal whose sample rate is greater than those. For these reasons, automatic quality assessment for fullband speech is an active research topic.
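The objective metrics defined below in this section translate almost directly into code. Here is a minimal numpy sketch of two of them, LSD and SNR, following the standard definitions used in the text; the small epsilon is an implementation detail added to avoid division by zero, not part of the formulas.

```python
import numpy as np

def snr_db(s_hat, s):
    """Time-domain SNR in dB: ground-truth energy over error energy."""
    return 10 * np.log10(np.sum(s ** 2) / np.sum((s_hat - s) ** 2))

def lsd(S_hat, S, eps=1e-12):
    """Log-spectral distance between magnitude spectrograms of shape
    (M frames, K bins): RMS log-ratio per frame, averaged over frames."""
    d = np.log10((S_hat ** 2 + eps) / (S ** 2 + eps)) ** 2
    return np.mean(np.sqrt(np.mean(d, axis=1)))

s = np.sin(np.linspace(0, 20 * np.pi, 1000))
print(snr_db(1.1 * s, s))   # 10% amplitude error -> approximately 20 dB
print(lsd(10 * np.ones((4, 8)), np.ones((4, 8))))   # log10(100) = 2
```

Note that LSD operates on magnitude spectrograms, so it is blind to phase errors, while SNR compares raw waveforms and therefore punishes even perceptually irrelevant phase differences.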
Mean Opinion Score (MOS) based on subjective evaluation continues to be the gold standard for choosing the best candidate for a given task, yet this process is expensive and time-consuming. Moreover, if paper A publishes results that paper B later claims to outperform by a marginal MOS score, the claim may be debatable without further information about the details of the process, the number of subjects, or any additional conditions that would allow a fair comparison between both evaluations. The following metrics are the most commonly reported among BWE researchers at the time of writing:

## 11.6.1. Log spectral distance (LSD)#

Log spectral distance is a frequency-domain metric that measures the logarithmic distance between two spectra. Since it is based on the magnitude spectrum, it does not take into account the correctness of the phase reconstruction:

$$\text{LSD}\left(S_h, \hat{S}_h\right)=\frac{1}{M}\sum\limits_{m=0}^{M-1}\sqrt{\frac{1}{K}\sum\limits_{k=0}^{K-1}\left(\text{log}_{10}\frac{\hat{S}_{h}\left(m, k\right)^2}{S_{h}\left(m, k\right)^2}\right)^2}$$

where $$\hat{S}_h$$ and $$S_h$$ are the magnitude spectrograms of the estimate and the ground truth, respectively, with $$M$$ frames and $$K$$ frequency bins. Some variations of this metric include only the reconstructed portion of the spectra; in that case the metric is found in the literature as LSD-HF. For example, if the bandwidth of the signal is extended from 16kHz to 32kHz, LSD-HF only considers FFT bins representing frequencies above 8kHz.

## 11.6.2. SNR#

Signal-to-noise ratio (SNR) provides a logarithmic time-domain comparison between the energy of the ground truth $$s_h$$ and the energy of the error, i.e. the difference between the ground truth $$s_h$$ and the estimate $$\hat{s}_h$$:

$$\text{SNR}\left(\hat{s}_h, s_h\right) = 10\text{log}_{10}\frac{\sum_{n=0}^{N-1}s_h[n]^2}{\sum_{n=0}^{N-1}\left(\hat{s}_h[n]-s_h[n]\right)^2}$$

## 11.6.3. SI-SDR#

Scale-invariant signal-to-distortion ratio (SI-SDR) is an improved version of the signal-to-distortion ratio (SDR) and, similarly to SNR, corresponds to a logarithmic time-domain comparison between the estimate $$\hat{s}_h$$ and the ground truth $$s_h$$:

$$\text{SI-SDR}=10\text{log}_{10}\left(\frac{\lVert e_{\text{target}}\rVert^2}{\lVert e_{\text{res}}\rVert^2}\right)$$

where

$$e_{\text{target}}=\frac{\hat{s}_h^T s_h}{\lVert s_h\rVert^2}s_h$$

and

$$e_{\text{res}}=\hat{s}_h-e_{\text{target}}$$

## 11.6.4. Mean Opinion Score (MOS)#

Mean Opinion Score (MOS) is a subjective measure, used typically in (but not limited to) telecommunications engineering, that represents the overall quality of a system. In the case of bandwidth extension, a trained listener assesses the resulting quality of a recording whose bandwidth has been artificially extended. ITU-T P.800.1 defines different uses of this score. Each rating is a single integer in the range 1 to 5, where 1 is the lowest and 5 the highest perceived quality. If several listeners have evaluated an algorithm, the final MOS is the arithmetic mean over all evaluation scores and can therefore be a non-integer value.

# 11.7. References#

BT20 Sebastian Braun and Ivan Tashev. A consolidated view of loss functions for supervised deep learning-based speech enhancement. 2020. doi:10.48550/ARXIV.2009.12286.

FOR19 Rafael Ferro, Nicolas Obin, and Axel Roebel. Cyclegan voice conversion of spectral envelopes using adversarial weights. 2019. doi:10.48550/ARXIV.1910.12614.

GSAW19 Archit Gupta, Brendan Shillingford, Yannis Assael, and Thomas C. Walters. Speech bandwidth extension with wavenet. 2019. doi:10.48550/ARXIV.1907.04927.

HXH+20 Xiang Hao, Chenglin Xu, Nana Hou, Lei Xie, Eng Siong Chng, and Haizhou Li. Time-domain neural network approach for speech bandwidth extension.
In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 866–870. 2020. doi:10.1109/ICASSP40776.2020.9054551.

HLL20 Tzu-hsien Huang, Chien-yu Lin, Jheng-hao Huang, and Hung-yi Lee. How far are we from robust voice conversion: a survey. 2020. doi:10.48550/ARXIV.2011.12063.

KKB20 Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. Hifi-gan: generative adversarial networks for efficient and high fidelity speech synthesis. 2020. doi:10.48550/ARXIV.2010.05646.

KEE17 Volodymyr Kuleshov, S. Zayd Enam, and Stefano Ermon. Audio super resolution using neural networks. CoRR, 2017. URL: http://arxiv.org/abs/1708.00853.

LTR+20 Yunpeng Li, Marco Tagliasacchi, Oleg Rybakov, Victor Ungureanu, and Dominik Roblek. Real-time speech frequency bandwidth extension. 2020. doi:10.48550/ARXIV.2010.10677.

LYX+18 Teck Yian Lim, Raymond A. Yeh, Yijia Xu, Minh N. Do, and Mark Hasegawa-Johnson. Time-frequency networks for audio super-resolution. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 646–650. 2018. doi:10.1109/ICASSP.2018.8462049.

LWK+21 Ju Lin, Yun Wang, Kaustubh Kalgaonkar, Gil Keren, Didi Zhang, and Christian Fuegen. A two-stage approach to speech bandwidth extension. In INTERSPEECH, 5. 2021. URL: https://maigoakisame.github.io/papers/interspeech21b.pdf.

LAGD18 Zhen-Hua Ling, Yang Ai, Yu Gu, and Li-Rong Dai. Waveform modeling and generation using hierarchical recurrent neural networks for speech bandwidth extension. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(5):883–894, May 2018. doi:10.1109/taslp.2018.2798811.

LTW+15 Bin Liu, Jianhua Tao, Zhengqi Wen, Ya Li, and Danish Bukhari. A novel method of artificial bandwidth extension using deep architecture. In INTERSPEECH. 2015. URL: https://www.isca-speech.org/archive/pdfs/interspeech_2015/liu15g_interspeech.pdf.

LGLC20 Gang Liu, Ke Gong, Xiaodan Liang, and Zhiguang Chen.
Cp-gan: context pyramid generative adversarial network for speech enhancement. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6624–6628. 2020. doi:10.1109/ICASSP40776.2020.9054060.

LCL+22 Haohe Liu, Woosung Choi, Xubo Liu, Qiuqiang Kong, Qiao Tian, and DeLiang Wang. Neural vocoder is all you need for speech super-resolution. 2022. doi:10.48550/ARXIV.2203.14941.

RWEH18 Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, and John R. Hershey. SDR - half-baked or well done? 2018. doi:10.48550/ARXIV.1811.02508.

SE18 Konstantin Schmidt and Bernd Edler. Blind bandwidth extension based on convolutional and recurrent deep neural networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5444–5448. 2018. doi:10.1109/ICASSP.2018.8462691.

SWFJ21 Jiaqi Su, Yunyun Wang, Adam Finkelstein, and Zeyu Jin. Bandwidth extension is all you need. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 696–700. 2021. doi:10.1109/ICASSP39728.2021.9413575.

TQSL21 Xu Tan, Tao Qin, Frank Soong, and Tie-Yan Liu. A survey on neural speech synthesis. 2021. doi:10.48550/ARXIV.2106.15561.

WWK+18 Mu Wang, Zhiyong Wu, Shiyin Kang, Xixin Wu, Jia Jia, Dan Su, Dong Yu, and Helen Meng. Speech super-resolution using parallel wavenet. In 2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP), 260–264. 2018. doi:10.1109/ISCSLP.2018.8706637.
https://trac.ct2.cryptool.org/browser/trunk/Documentation/Developer/PluginHowTo/part2.tex?rev=845
source:trunk/Documentation/Developer/PluginHowTo/part2.tex@845 (last changed in r845 by Arno Wacker: major style update; minor exemplary changes in chapter 2)

\chapter{Plugin Implementation}
In this chapter we provide step-by-step instructions for implementing your own CrypTool 2.0 plugin. The instructions refer to MS Visual Studio 2008 Professional, so before starting you should have installed your copy of MS Visual Studio 2008.

\section{New project}
\label{sec:CreateANewProjectInVS2008ForYourPlugin}
Open Visual Studio 2008 and create a new project:

Select ''.NET Framework 3.5'' as the target framework (the Visual Studio Express edition does not offer this selection because it automatically chooses the current target framework), and ''Class Library'' as the default template to create a DLL file. Give the project a unique and significant name (here: ''Caesar''), and choose a location to save it (the Express edition will ask for a save location later, when you close your project or the environment). Finally, confirm by pressing the ''OK'' button.

Now your Visual Studio solution should look like this:

\begin{figure}
    %includegraphics...
    \caption{Figure 1}\label{fig:figure1}
\end{figure}

\section{Interface selection}
\label{sec:SelectTheInterfaceYourPluginWantsToServe}
First we have to add a reference to the CrypTool library ''CrypPluginBase.dll'', where all necessary CrypTool plugin interfaces are declared.

\begin{figure}
    %includegraphics...
    \caption{Figure 2}\label{fig:figure2}
\end{figure}

Right-click the ''References'' item in the Solution Explorer and choose ''Add Reference''.

Now browse to the path where the library file is located (e.g.
''C:\\Documents and Settings\\<Username>\\My Documents\\Visual Studio 2008\\Projects\\CrypPluginBase\\bin\\Debug'')

and select the library by double-clicking the file or pressing the ''OK'' button.

\begin{figure}
    %includegraphics...
    \caption{Figure 3}\label{fig:figure3}
\end{figure}

Besides the CrypPluginBase you need to add three assembly references that provide the necessary ''Windows'' namespaces for your user-control properties ''Presentation'' and ''QuickWatchPresentation''. Select the following .NET components:

\begin{itemize}
    \item PresentationCore
    \item PresentationFramework
    \item WindowsBase
\end{itemize}

Afterwards your reference tree view should look like this:

[IMAGE]

If your plugin is based on further libraries, you have to add them in the same way.

\section{Create the classes for the algorithm and for its settings}\label{sec:CreateTheClassesForTheAlgorithmAndForItsSettings}
In the next step we have to create two classes. The first class, named ''Caesar'', has to inherit from IEncryption to provide an encryption plugin. (If you want to develop a hash plugin, your class has to inherit from IHash instead.) The second class, named ''CaesarSettings'', has to inherit from ISettings.

\subsection{Create the class for the algorithm (Caesar)}\label{sec:CreateTheClassForTheAlgorithmCaesar}
Visual Studio automatically creates a class named ''Class1.cs''. There are two ways to change the name to ''Caesar.cs'':

\hspace{20pt}-rename the existing class, or

\hspace{20pt}-delete the existing class and create a new one.

Which one you choose is up to you. We choose the second way, as you can see in the next screenshot:

[IMAGE]

Now right-click the project item ''Caesar'' and select ''Add->Class...'':

[IMAGE]

Now give your class a unique name. We call the class, as mentioned above, ''Caesar.cs'', and make it public so that it is available to other classes.
[IMAGE]

\subsection{Create the class for the settings (CaesarSettings)}\label{sec:CreateTheClassForTheSettingsCaesarSettings}
Add a second public class for ISettings in the same way. We call this class ''CaesarSettings''. The settings class provides the necessary information about controls, captions, descriptions and default parameters (e.g. key settings, alphabets, key length and action) used to build the TaskPane in CrypTool. Below you can see how such a TaskPane could look for the example of a Caesar encryption.

[IMAGE]

\subsection{Add the namespaces for the class Caesar and the interfaces to inherit from}
Now open the ''Caesar.cs'' file by double-clicking it in the Solution Explorer and include the necessary namespaces in the class header by typing the according ''using'' statements. The CrypTool 2.0 API provides the following namespaces:

\hspace{20pt}-Cryptool.PluginBase = interfaces like IPlugin, IHash and ISettings, attributes, enumerations, delegates and extensions

\hspace{20pt}-Cryptool.PluginBase.Analysis = interface for cryptanalysis plugins like ''Stream Comparator''

\hspace{20pt}-Cryptool.PluginBase.Cryptography = interface for all encryption and hash algorithms like AES, DES or the MD5 hash

\hspace{20pt}-Cryptool.PluginBase.Editor = interface for editors you want to implement for CrypTool 2.0, like the default editor

\hspace{20pt}-Cryptool.PluginBase.Generator = interface for generators like the random input generator

\hspace{20pt}-Cryptool.PluginBase.IO = interface for CryptoolStream and for input and output plugins like text input, file input, text output and file output

\hspace{20pt}-Cryptool.PluginBase.Miscellaneous = provides all event helpers like GuiLogMessage or PropertyChanged

\hspace{20pt}-Cryptool.PluginBase.Tool = interface for all foreign tools which CrypTool 2.0 has to provide and which do not exactly support the CrypTool 2.0 API

\hspace{20pt}-Cryptool.PluginBase.Validation = interface
which provides methods for validation, e.g. via regular expressions

In this case we want to implement the Caesar algorithm, which means we need to include the following namespaces:

\hspace{20pt}-''Cryptool.PluginBase'' to provide ''ISettings'' for the CaesarSettings class

\hspace{20pt}-''Cryptool.PluginBase.Cryptography'' to provide ''IEncryption'' for the Caesar class

\hspace{20pt}-''Cryptool.PluginBase.Miscellaneous'' to use the entire CrypTool event handling

It is important to define a new default namespace for our public class (''Caesar''). In CrypTool the default namespace has the form ''Cryptool.[name of class]''; therefore our namespace has to be defined as ''Cryptool.Caesar''.

Up to now the source code should look as shown below:

[IMAGE]

Next, let your class ''Caesar'' inherit from IEncryption by inserting the following statement:

[IMAGE]

There is an underscore below the I in the IEncryption statement. Move your mouse over it, or place the cursor on it and press ''Shift+Alt+F10'', and you will see the following submenu:

[IMAGE]

Choose the item ''Implement interface 'IEncryption'\,''. Visual Studio will now insert all interface members needed to interact with the CrypTool core (this also saves you a lot of typing).

Your code will now look like this:

[IMAGE]

Let's now take a look at the second class, ''CaesarSettings'', by double-clicking the ''CaesarSettings.cs'' file in the Solution Explorer. Here, too, we first have to include the namespace ''Cryptool.PluginBase'' in the class header and let the settings class inherit from ''ISettings'', analogous to the Caesar class. Visual Studio will again automatically insert the available interface code.

[IMAGE]

Now we have to implement the controls (like buttons or text boxes) we need in the CrypTool \textbf{TaskPane} to modify the settings of the algorithm.
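Putting the steps above together, the skeleton of the two classes could look roughly as follows. (This listing is a sketch reconstructing what the screenshots show; only the names are taken from the text, so details may differ from the actual tutorial sources.)

\begin{verbatim}
using Cryptool.PluginBase;
using Cryptool.PluginBase.Cryptography;
using Cryptool.PluginBase.Miscellaneous;

namespace Cryptool.Caesar
{
    public class Caesar : IEncryption
    {
        // interface members inserted by Visual Studio go here
    }
}
\end{verbatim}

The ''CaesarSettings'' class inheriting from ISettings is built analogously in its own file.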
Before we go back to the code of the Caesar class, we have to add an icon image to our project, which will be shown in the CrypTool ribbon bar and/or navigation pane. As there is no default, providing an icon image is mandatory. (Note: this will be changed in the future; a default icon will be used if no icon image has been provided.) For testing purposes you may create a simple black-and-white PNG image with MS Paint or Paint.NET. As image size you can use 40x40 pixels, for example, but as the image will be scaled when required, any size should do. Place the image file in your project directory or in a subdirectory. Then right-click the project item ''Caesar'' in the Solution Explorer and select ''Add->Existing Item...'':

[IMAGE]

Then select ''Image Files'' as the file type, and choose the icon for your plugin:

[IMAGE]

Finally we have to set the icon as a ''Resource'' to avoid having to ship the icon as a separate file. Right-click the icon and select the item ''Properties'':

[IMAGE]

In the ''Properties'' panel you have to set the ''Build Action'' to ''Resource'' (not embedded resource):

[IMAGE]

\section{Set the attributes for the class Caesar}\label{sec:SetTheAttributesForTheClassMD5}
Now let's go back to the code of the Caesar class (the ''Caesar.cs'' file). First we have to set the necessary attributes for our class. These attributes provide additional information for the CrypTool 2.0 environment. If they are not set, your plugin won't show up in the GUI, even if everything else is implemented correctly.

Attributes are used for declarative programming and provide meta data that can be attached to the existing .NET meta data, like classes and properties. CrypTool provides a set of custom attributes that are used to mark the different parts of your plugin.
\textit{[Author]}

The first attribute, called ''Author'', is optional, which means we are not forced to define it. It provides additional information about the plugin developer. We set this attribute to demonstrate how it has to look in case you want to provide it.

[IMAGE]

As we can see above, the author attribute takes four elements of type string:

\hspace{20pt}-Author = name of the plugin developer

\hspace{20pt}-Email = email address of the plugin developer, if he wants to be contacted

\hspace{20pt}-Institute = current affiliation of the developer, like a university or company

\hspace{20pt}-Url = the website or homepage of the developer

All these elements are also optional; the developer decides what he wants to publish. Unused elements shall be set to null or a zero-length string ("").

Our author attribute should now look as you can see below:

[IMAGE]

\textit{[PluginInfo]}

The second attribute, called ''PluginInfo'', provides the necessary information about the plugin, like its caption and tool tip. This attribute is mandatory. The attribute has the following definition:

[IMAGE]

This attribute expects the following elements:

\hspace{20pt}o\hspace{10pt}startable = set this flag to true only if your plugin is some kind of input or generator plugin (typically if your plugin only has outputs and no inputs). In all other cases use false. This flag is important: setting it to true for a non-input/generator plugin will result in unpredictable chain runs. This element is mandatory.

\hspace{20pt}o\hspace{10pt}caption = of type string; the name of the plugin (e.g. to provide the button content). This element is mandatory.

\hspace{20pt}o\hspace{10pt}toolTip = of type string; a description of the plugin (e.g. to provide the button tool tip). This element is optional.
\hspace{20pt}o\hspace{10pt}descriptionUrl = of type string; defines where to find the description files (e.g. XAML files). This element is optional.

\hspace{20pt}o\hspace{10pt}icons = of type string array; provides all necessary icon paths you want to use in the plugin (e.g. the plugin icon as seen above). This element is mandatory.

Unused optional elements shall be set to null or a zero-length string ("").

Note 1: It is possible to use the plugin without setting a caption, though it is not recommended. This will be changed in the future, and the plugin will then fail to load without a caption.

Note 2: Currently a zero-length toolTip string appears as an empty box. This will be changed in the future.

Note 3: Tooltip and description currently do not support internationalization and localization. This will be changed in the future.

In our example the first parameter, ''startable'', has to be set to ''false'', because our algorithm is neither an input nor a generator plugin.

[IMAGE]

The next two parameters are needed to define the plugin's name and its description:

[IMAGE]

The fourth element defines the location path of the description file. The parameter is made up of \guilsinglleft Assembly name\guilsinglright/\guilsinglleft file name\guilsinglright, or \guilsinglleft Assembly name\guilsinglright/\guilsinglleft Path\guilsinglright/\guilsinglleft file name\guilsinglright\ if you want to store your description files in a separate folder. The description file has to be of type XAML. In our case we create a folder called ''DetailedDescription'' and store our XAML file there, together with the necessary images if needed. How you manage the files and folders is up to you.
This folder could now look as you can see below:

[IMAGE]

Accordingly, the attribute parameter has to be set to:

[IMAGE]

The detailed description could now look like this in CrypTool (right-click the plugin icon on the workspace and select "Show description"):

[IMAGE]

The last parameter tells CrypTool the names of the provided icons. This parameter is made up of \guilsinglleft Assembly name\guilsinglright/\guilsinglleft file name\guilsinglright\ or \guilsinglleft Assembly name\guilsinglright/\guilsinglleft Path\guilsinglright/\guilsinglleft file name\guilsinglright.

The most important icon is the plugin icon, which will be shown in CrypTool in the ribbon bar or navigation pane (this is the first icon in the list, so you have to provide at least one icon for a plugin). Having added the icon to the solution as described above, we now have to tell CrypTool where to find it by setting this parameter as you can see below:

[IMAGE]

You can define further icon paths if needed by appending the path strings, separated by commas.

\section{Set the private variables for the settings in the class Caesar}
\label{sec:SetThePrivateVariablesForTheSettingsInTheClassMD5}
The next step is to define some private variables needed for the settings and the input and output data, which could look like this:

[IMAGE]

Please notice the squiggly line under the type "CryptoolStream" of the variable inputData and the list listCryptoolStreamsOut. "CryptoolStream" is a data type for input and output between plugins and is able to handle large amounts of data. To use CrypTool's own stream type, include the namespace "Cryptool.PluginBase.IO" with a "using" statement, as explained in chapter 3.3.
The following private variables are used in this example:

\hspace{20pt}-CaesarSettings settings: required to implement the IPlugin interface properly

\hspace{20pt}-CryptoolStream inputData: stream to read the input data from

\hspace{20pt}-byte[] outputData: byte array to save the output data

\hspace{20pt}-List\guilsinglleft CryptoolStream\guilsinglright\ listCryptoolStreamsOut: list of all streams created by the Caesar plugin, required to perform a clean dispose

\section{Define the code of the class Caesar to fit the interface}\label{sec:DefineTheCodeOfTheClassMD5ToFitTheInterface}
Next we have to complete our code to correctly serve the interface.

First, we add a constructor to our class in which we create an instance of our settings class:

[IMAGE]

Secondly, we have to implement the property "Settings" defined in the interface:

[IMAGE]

Thirdly, we have to define two properties with their corresponding attributes. This step is necessary to tell CrypTool that these properties are input/output properties used for data exchange with other plugins.

The attribute is named "PropertyInfo" and consists of the following elements:

\hspace{20pt}-direction = defines whether this property is an input or output property, i.e. whether it reads input data or writes output data

\hspace{30pt}o\hspace{10pt}Direction.Input

\hspace{30pt}o\hspace{10pt}Direction.Output

\hspace{20pt}-caption = caption of the property (e.g. shown at the input on the dropped icon in the editor), see below:

[IMAGE]

\hspace{20pt}-toolTip = tool tip of the property (e.g. shown at the input arrow on the dropped icon in the editor), see above

\hspace{20pt}-descriptionUrl = not used right now

\hspace{20pt}-mandatory = this flag defines whether an input is required to be connected by the user. If set to true, there has to be an input connection that provides data.
If no input data is provided for a mandatory input, your plugin will not be executed in the workflow chain. If set to false, connecting the input is optional. This only applies to input properties; for Direction.Output, this flag is ignored.

\hspace{20pt}-hasDefaultValue = if this flag is set to true, CrypTool treats this plugin as though the input had already received input data.

\hspace{20pt}-DisplayLevel = defines in which display levels your property will be shown in CrypTool. CrypTool provides the following display levels:

\hspace{30pt}o\hspace{10pt}DisplayLevel.Beginner

\hspace{30pt}o\hspace{10pt}DisplayLevel.Experienced

\hspace{30pt}o\hspace{10pt}DisplayLevel.Expert

\hspace{30pt}o\hspace{10pt}DisplayLevel.Professional

\hspace{20pt}-QuickWatchFormat = defines how the content of the property will be shown in the quick watch. CrypTool accepts the following quick watch formats:

\hspace{30pt}o\hspace{10pt}QuickWatchFormat.Base64

\hspace{30pt}o\hspace{10pt}QuickWatchFormat.Hex

\hspace{30pt}o\hspace{10pt}QuickWatchFormat.None

\hspace{30pt}o\hspace{10pt}QuickWatchFormat.Text

A quick watch in Hex could look like this:

[IMAGE]

\hspace{20pt}-quickWatchConversionMethod = this string points to a conversion method; most plugins can use a "null" value here, because no conversion is necessary. The quick watch function uses the system's default encoding to display data. Only if your data is in some other format, like Unicode or UTF-8, do you have to provide the name of a conversion method as a string. The method header has to look like this:

object YourMethodName(string PropertyNameToConvert)

First we define the "InputData" property getter and setter:

[IMAGE]

In the getter we check whether the input data is null. If input data is present, we declare a new CryptoolStream to read the input data, open it, and add it to the list in which all output stream references are stored.
Finally, the new stream is returned.

\small Note 1: It is currently not possible to read directly from the input data stream without creating an intermediate CryptoolStream.

Note 2: The naming may be confusing. The new CryptoolStream is not an output stream, but it is added to the list of output streams to enable a clean dispose afterwards. See chapter 9 below.

The setter sets the new input data and announces the data to the CrypTool 2.0 environment by calling OnPropertyChanged("\guilsinglleft Property name\guilsinglright"). For input properties this step is necessary to update the quick watch view.

The output data property could look like this:

[IMAGE]

CrypTool does not require implementing output setters, as they will never be called from outside the plugin. Nevertheless, in this example our plugin accesses the property itself, so we chose to implement the setter.

You can also provide additional output data types if you like. For example, we also provide an output of type CryptoolStream:

[IMAGE]

This property's setter is not called and therefore not implemented.

Notice the method "GuiLogMessage" in the source code above. This method is used to send messages to the CrypTool status bar. This is a nice feature to inform the user what your plugin is currently doing.

[IMAGE]

The method takes two parameters:

\hspace{20pt}-Message = the text shown in the status bar, of type string

\hspace{20pt}-NotificationLevel = groups the messages by their alert level

\hspace{30pt}o\hspace{10pt}NotificationLevel.Error

\hspace{30pt}o\hspace{10pt}NotificationLevel.Warning

\hspace{30pt}o\hspace{10pt}NotificationLevel.Info

\hspace{30pt}o\hspace{10pt}NotificationLevel.Debug

As you may have noticed, we use two methods, "OnPropertyChanged" and "GuiLogMessage", which are not yet defined.
So we have to define these two methods, as you can see below:

[IMAGE]

To use the "PropertyChangedEventHandler" you have to include the namespace "System.ComponentModel".

Our complete set of included namespaces now looks like this:

[IMAGE]

\section{Complete the actual code for the class Caesar}\label{sec:CompleteTheActualCodeForTheClassMD5}
Up to now, the plugin is ready to be accepted by the CrypTool base application and shown correctly in the CrypTool menu. What we need now is the implementation of the actual algorithm in the function "Execute()", which is up to you as the plugin developer.

Let us demonstrate the Execute() function, too. Our algorithm is based on the .NET framework:

[IMAGE]

It is important to make sure that all changes of output properties are announced to the CrypTool environment. In this example this happens by calling the setter of OutputData, which in turn calls "OnPropertyChanged" for both output properties, "OutputData" and "OutputDataStream". Instead of calling the property's setter, you can also call "OnPropertyChanged" directly within the "Execute()" method.

You have certainly noticed the unknown method "ProgressChanged", which you can use to show the algorithm's current progress on the plugin icon. To use this method you also have to declare it, so that the code compiles successfully:

[IMAGE]

\section{Perform a clean dispose}\label{sec:PerformACleanDispose}
Be sure you have closed and cleaned up all your streams after execution and when CrypTool decides to dispose of the plugin instance. Though not required, we run the dispose code before execution as well:

[IMAGE]

\section{Finish implementation}\label{sec:FinishImplementation}
When adding plugin instances to the CrypTool workspace, CrypTool checks whether the plugin runs without any exception.
If any IPlugin method throws an exception, CrypTool will show an error and prohibit using the plugin. Therefore we have to remove the "NotImplementedException" from the methods "Initialize()", "Pause()" and "Stop()". In our example it is sufficient to provide empty implementations.

[IMAGE]

The methods "Presentation()" and "QuickWatchPresentation()" can be used if a plugin developer wants to provide their own visualization of the plugin algorithm, which will be shown in CrypTool. Take a look at the PRESENT plugin to see how a custom visualization can be realized. For our Caesar example we don't want to implement a custom visualization, therefore we return "null":

[IMAGE]

Your plugin should compile without errors at this point.

\section{Sign the created plugin}\label{sec:SignTheCreatedPlugin}

\section{Import the plugin to CrypTool and test it}\label{sec:ImportThePluginToCryptoolAndTestIt}
After you have built the plugin, you need to move the newly created plugin DLL to a location where CrypTool can find it. There are the following ways to do this:

\hspace{20pt}1. Copy your plugin DLL file into the folder "CrypPlugins", which has to be in the same folder as the CrypTool executable, "CrypWin.exe". If necessary, create the folder "CrypPlugins". This folder is called "Global storage" in the CrypTool architecture. Changes in this folder will take effect for all users on a multi-user Windows system. Finally, restart CrypTool.

[IMAGE]

\hspace{20pt}2. Copy your plugin DLL file into the folder "CrypPlugins" which is located in your home path in the folder "ApplicationData", and restart CrypTool. This home folder path is called "Custom storage" in the CrypTool architecture. Changes in this folder will only take effect for the current user.
On a German Windows XP system the home folder path could look like "C:$\backslash$Dokumente und Einstellungen$\backslash$\guilsinglleft User\guilsinglright$\backslash$Anwendungsdaten$\backslash$CrypPlugins", and on Vista the path will look like "C:$\backslash$Users$\backslash$\guilsinglleft user\guilsinglright$\backslash$Application Data$\backslash$CrypPlugins".

[IMAGE]

\hspace{20pt}3. You can also import new plugins directly from the CrypTool interface. Just execute CrypWin.exe and select the "Download Plugins" button. An "Open File Dialog" will open and ask where the new plugin is located. After selecting the new plugin, CrypTool will automatically import it into the custom storage folder. With this option you do not have to restart CrypTool; all related menu entries will be updated automatically. Notice that this plugin import function only accepts signed plugins.

This option is a temporary solution for importing new plugins. In the future this will be done online by a web service.

\hspace{20pt}4. Use a post-build event in your project properties to copy the DLL automatically after building it in Visual Studio. Right-click on your plugin project and select "Properties":

[IMAGE]

Select "Build Events":

[IMAGE]

Enter the following text snippet into "Post-build event command line":

cd "\$(ProjectDir)"

cd ..$\backslash$..$\backslash$CrypWin$\backslash$\$(OutDir)

if not exist "./CrypPlugins" mkdir "./CrypPlugins"

del /F /S /Q "Caesar*.*"

copy "\$(TargetDir)Caesar*.*" "./CrypPlugins"

You need to adapt the marked plugin name ("Caesar") to your actual project name.

\section{Source code and source template}\label{sec:SourceCodeAndSourceTemplate}
Here you can download the whole source code which was presented in this "Howto" as a Visual Studio solution:
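The Execute() screenshots themselves are not reproduced in this text, so as an illustration only, here is a minimal sketch of the byte-wise Caesar shift that such an Execute() method performs. It is written in Java rather than the plugin's C#, and the method name, the non-negative key, and the letters-only shift convention are our assumptions, not part of the original how-to:

```java
// Hedged sketch of a Caesar shift: letters A-Z/a-z are rotated by 'key'
// positions (key assumed >= 0); all other characters pass through unchanged.
public class CaesarSketch {

    static String encrypt(String plaintext, int key) {
        StringBuilder sb = new StringBuilder(plaintext.length());
        for (char c : plaintext.toCharArray()) {
            if (c >= 'A' && c <= 'Z') {
                sb.append((char) ('A' + (c - 'A' + key) % 26));
            } else if (c >= 'a' && c <= 'z') {
                sb.append((char) ('a' + (c - 'a' + key) % 26));
            } else {
                sb.append(c); // spaces, digits, punctuation are left as-is
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(encrypt("Attack at dawn", 3)); // Dwwdfn dw gdzq
    }
}
```

Decryption is simply the same shift with key 26 - k, e.g. `encrypt(ciphertext, 23)` undoes a shift of 3.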
https://physics.stackexchange.com/questions/588118/does-introducing-a-gauge-field-into-the-complex-scalar-field-theory-lagrangian-c
# Does introducing a gauge field into the complex scalar field theory Lagrangian change its dynamics?

I've been reading Lancaster & Blundell, and in Chapter 14 they focus on the Lagrangian $$\mathcal{L}=(\partial^\mu\psi)^\dagger(\partial_\mu\psi) - m^2\psi^\dagger\psi.$$ To impose invariance under the transformation $$\psi\rightarrow\psi\exp(i\alpha(x))$$, where $$\alpha(x)$$ is a coordinate-dependent phase, they replace the derivatives in $$\mathcal{L}$$ with covariant derivatives $$D_\mu = \partial_\mu + iqA_\mu.$$ Invariance then follows if we also admit the transformation $$A_\mu\rightarrow A_\mu-\frac{1}{q}\partial_\mu\alpha(x).$$

Now, my question is a simple one: why are we 'allowed' to change the Lagrangian seemingly arbitrarily? I see how this change leads to the invariance of $$\mathcal{L}$$ with respect to the transformation $$\psi\rightarrow\psi\exp(i\alpha(x))$$, but surely in doing so we change the dynamics of the field $$\psi$$? Expansion of the 'new' Lagrangian would seem to suggest that the EL equations do indeed result in different dynamics.

• I never liked how textbooks explained this... it's not that you're "allowed" to do this or that. Your first Lagrangian represents one theory; adding a gauge field represents a completely different theory. The motivation is that the change is relatively simple, not that the change does nothing. Oct 19 '20 at 7:32

This is indeed true, and is what is called the gauge principle. It tells us that if we make a global symmetry local, we need to add a corresponding gauge field such that the total Lagrangian still remains invariant under this local gauge transformation. This is a new dynamical field which has its own equations of motion and can couple to the matter field, leading to interactions.

In this case the original Lagrangian is invariant under $$U(1)$$ as $$\psi \to \psi e^{i \alpha}$$; note that also $$\partial_\mu \psi \to \partial_\mu \psi e^{i \alpha}$$.
We say that these fields transform in the fundamental representation of $$U(1)$$. Now after making our transformation local, $$\alpha \equiv \alpha(x)$$, it's easy to see that $$\partial_\mu \psi \not\to \partial_\mu \psi e^{i \alpha(x)}.$$

To account for this, as we still want our field to transform in the fundamental representation, we have to introduce a gauge field $$A_\mu(x)$$ and a covariant derivative $$\mathcal{D}_\mu$$ such that $$\mathcal{D}_\mu \psi \to \mathcal{D}_\mu \psi e^{i\alpha(x)}$$. This last transformation dictates how $$A_\mu(x)$$ should transform: demanding $$\mathcal{D}_\mu(\psi e^{i\alpha(x)}) = e^{i\alpha(x)}\mathcal{D}_\mu\psi$$ with $$\mathcal{D}_\mu = \partial_\mu + iqA_\mu$$ forces $$A_\mu \to A_\mu - \frac{1}{q}\partial_\mu\alpha(x)$$, which is exactly the transformation quoted in the question.

• I'm not familiar with the physics convention, when you say $\mathcal D_\mu \psi e^{i\alpha(x)}$, do you mean $(\mathcal D_\mu \psi) e^{i\alpha(x)}$ or $\mathcal D_\mu (\psi e^{i\alpha(x)})$? Nov 3 '20 at 20:06

• The second one, $\mathcal{D}_\mu$ acting on everything on the right if there are no brackets. Nov 3 '20 at 20:22

As was mentioned in some of the comments, the Lagrangians $$\mathcal{L}=(\partial^\mu\psi)^\dagger(\partial_\mu\psi)-m^2\psi^\dagger\psi$$ and $$\mathcal{L}=(D^\mu\psi)^\dagger(D_\mu\psi)-m^2\psi^\dagger\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$$ represent distinct theories, each with its own properties.

The usual way to motivate the transition from the "ungauged" theory to the "gauged" one is to note that if we want invariance under the transformation $$\psi\rightarrow e^{i\alpha}\psi$$ for $$\alpha=\alpha(x)$$ an arbitrary real function, then taking a Lagrangian that is already invariant in the special case where $$\alpha$$ is a constant and replacing all the derivatives of $$\psi$$ by covariant derivatives $$D_\mu$$ is good enough to construct a Lagrangian which is also invariant under the local transformations.

There is another way of looking at things, however, which may feel a little less ad hoc. Though this viewpoint can be described in terms of this example of $$\psi$$ fields, it's slightly more natural to start with the example of a vector field.
So, suppose that $$V^a$$ are the components of some vector field - note that these are only the components. The vector field itself, meaning the abstract object which is invariant under coordinate changes, is $$V=V^a\boldsymbol{e}_a$$, where the $$\boldsymbol{e}_a$$ form a basis of vectors at each point in space (technically called frame fields). For example, in two dimensions, we could take $$\boldsymbol{e}_0=\boldsymbol{\hat r}$$ and $$\boldsymbol{e}_1=\boldsymbol{\hat \theta}$$.

Now the key assumption is that the physics of our system should not depend on the basis vectors we choose to represent our vector fields in - that is, if we changed to Cartesian unit vectors instead of polar unit vectors, the components $$V^a$$ would certainly need to change, but the object $$V=V^a\boldsymbol{e}_a$$ should not. Since any change in the basis vectors $$\boldsymbol{e}_a$$ will be a (linear) map from a linear space to itself, these can be represented by matrices $$U^a_b$$, so under a change in basis we would have $$\boldsymbol{e}^\prime_a=U^b_a\boldsymbol{e}_b$$. If we are truly to be independent of the basis vectors, we must be able to perform such a transformation point by point, so these basis-change matrices may have arbitrary dependence on the spacetime point, $$U^a_b=U^a_b(x)$$. In order for $$V$$ to be independent of these changes, the components must transform by the inverse of $$U$$, $$V^{\prime\,a}=U^{-1\, a}_b V^b$$.

Finally, we want to build our Lagrangian out of $$V$$ and its derivatives. So long as our manifold has a metric, we can build arbitrarily high derivatives out of the differential $$d$$ and the Hodge dual $$*$$. If we compute the differential of $$V$$ in terms of the components, we would find $$dV=(dV^a)\boldsymbol{e}_a+V^a(d\boldsymbol{e}_a).$$ The differential of the components is simple because these are all $$0$$-forms (scalars), and so $$d V^a=\partial_\nu V^a dx^\nu$$.
For the differential of the basis vectors, we can first note that the result must (a) be a 1-form and (b) be some combination of unit vectors again. These two statements together imply the differential must take the generic form $$d\boldsymbol{e}_a=(A_\mu)_a^b\boldsymbol{e}_bdx^\mu$$ where $$A_{\mu\,a}^b$$ is some unknown function, suggestively named. Putting this result back into the calculation of $$dV$$, we find $$dV=\partial_\mu V^a\boldsymbol{e}_adx^\mu+V^aA_{\mu\,a}^b\boldsymbol{e}_bdx^\mu.$$ Collecting the differentials, unit vectors, and components together, this becomes $$dV=\boldsymbol{e}_adx^\mu(\delta^a_b\partial_\mu+A_{\mu\,b}^a)V^b=\boldsymbol{e}_adx^\mu(D_\mu)^a_bV^b.$$ In the last line we have identified the covariant derivative $$D$$. This differs slightly from the covariant derivative in the question by an overall scaling of $$A$$ (the $$iq$$), which could have been absorbed into our definition of $$A$$. This expression also differs slightly from what's in the question by the additional indices $$a$$ and $$b$$ floating around.

In the case of the complex scalar field, we are not dealing with a vector, but instead some object $$\tilde \psi=\psi z$$ where now $$z$$ is some complex number with $$|z|=1$$. This now plays the role our $$\boldsymbol{e}$$'s played before (but has no indices). Since $$z$$ must have modulus 1, we can only transform to a new $$z$$ by $$z^\prime=e^{iq\alpha}z$$ where $$\alpha=\alpha(x)$$, in the same way the change-of-basis matrix $$U$$ was allowed to vary point to point (and $$q$$ has been put in for convenience). Since there are no indices on this $$z$$, our calculation of the differential would yield $$d\tilde \psi=dx^\mu zD_\mu\psi=dx^\mu z(\partial_\mu+iqA_\mu)\psi.$$

As a fun side note, observe that if in the example of a vector we renamed $$A$$ to $$\Gamma$$ and called the gauge potential a Christoffel symbol instead, we would immediately reproduce the covariant derivative from general relativity.
• You have given an idea of how we can geometrically approach gauge theory. However, this is not an answer to the question. Nov 3 '20 at 20:30

• @NDewolf The question asked had two parts: first, whether this Lagrangian is the same, and second, why we should be 'allowed' to make this change. I started by answering that the Lagrangians are indeed different, which answers the first question. The second part of the question is why such a change is reasonable. My answer is constructed to address this from a geometrical perspective. Nov 3 '20 at 23:00

• The question was specifically related to how this is "allowed" with respect to the dynamics. Your answer is just a geometric way of phrasing invariance. Nov 4 '20 at 7:57
https://en.wikipedia.org/wiki/Modified_Dietz_Method
# Modified Dietz method

The modified Dietz method[1][2] is a measure of the historical performance of an investment portfolio in the presence of external flows. (External flows are movements of value, such as transfers of cash, securities or other instruments, in or out of the portfolio, with no equal simultaneous movement of value in the opposite direction, and which are not income from the investments in the portfolio, such as interest, coupons or dividends.)

To calculate the modified Dietz return, divide the gain or loss in value, net of external flows, by the average capital over the period of measurement. The result of the calculation is expressed as a percentage rate of return for the time period. The average capital weights individual cash flows by the amount of time from when those cash flows occur until the end of the period. This method has the practical advantage over the internal rate of return (IRR) that it does not require repeated trial and error to get a result.[3]

The cash flows used in the formula are weighted based on the time they occurred in the period. For example, if they occurred at the beginning of the month, they would have a higher weight than if they occurred at the end of the month. This is different from the simple Dietz method, in which the cash flows are weighted equally regardless of when they occurred during the measurement period, which works on an assumption that the flows are distributed evenly throughout the period.

With the advance of technology, most systems can calculate a true time-weighted return by calculating a daily return and geometrically linking, in order to get a monthly, quarterly, annual or any other period return.
However, the modified Dietz method remains useful for performance attribution, because it still has the advantage of allowing modified Dietz returns on assets to be combined with weights in a portfolio, calculated according to average invested capital, such that the weighted average gives the modified Dietz return on the portfolio. Time-weighted returns do not allow this.

This method for return calculation is used in modern portfolio management. It is one of the methodologies of calculating returns recommended by the Investment Performance Council (IPC) as part of their Global Investment Performance Standards (GIPS). The GIPS are intended to provide consistency to the way portfolio returns are calculated internationally.[4]

The method is named after Peter O. Dietz.[5]

## Formula

The formula for the modified Dietz method is as follows:

${\displaystyle {\cfrac {\text{gain or loss}}{\text{average capital}}}={\cfrac {EMV-BMV-F}{BMV+\sum _{i=1}^{n}W_{i}\times F_{i}}}}$

where

${\displaystyle EMV}$ is the ending market value

${\displaystyle BMV}$ is the beginning market value

${\displaystyle F}$ is the net external inflow for the period (contributions to a portfolio are entered as positive flows while withdrawals are entered as negative flows)

and

${\displaystyle \sum _{i=1}^{n}W_{i}\times {F_{i}}=}$ the sum of each flow ${\displaystyle F_{i}}$ multiplied by its weight ${\displaystyle W_{i}}$

The weight ${\displaystyle W_{i}}$ is the proportion of the time period between the point in time when the flow ${\displaystyle F_{i}}$ occurs and the end of the period.
Assuming that the flow happens at the end of the day, ${\displaystyle W_{i}}$ can be calculated as

${\displaystyle W_{i}={\frac {C-D_{i}}{C}}}$

where

${\displaystyle C}$ is the number of calendar days during the return period being calculated, which equals end date minus start date (plus 1, unless you adopt the convention that the start date is the same as the end date of the previous period)

${\displaystyle D_{i}}$ is the number of days from the start of the return period until the day on which the flow ${\displaystyle F_{i}}$ occurred.

This assumes that the flow happens at the end of the day. If the flow happens at the beginning of the day, use the following formula for calculating the weight instead:

${\displaystyle W_{i}={\frac {C-D_{i}+1}{C}}}$

## Fees

To measure returns net of fees, allow the value of the portfolio to be reduced by the amount of the fees. To calculate returns gross of fees, compensate for them by treating them as an external flow, and exclude accrued fees from valuations.

## Comparison with Time-Weighted Return and Internal Rate of Return

The Modified Dietz method has the practical advantage over the true time-weighted rate of return method, in that the calculation of a Modified Dietz return does not require portfolio valuations at each point in time whenever an external flow occurs. The internal rate of return method shares this practical advantage with the Modified Dietz method.

The Modified Dietz method has the practical advantage over the internal rate of return method, in that there is a closed formula for the Modified Dietz return, whereas iterative numerical methods are usually required to estimate the internal rate of return.

The Modified Dietz method is based upon a simple rate of interest principle. It approximates the internal rate of return method, which applies a compounding principle, but if the flows and rates of return are large enough, the results of the Modified Dietz method will significantly diverge from the internal rate of return.
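The two day-count weighting conventions above can be sketched in a few lines (Java here, matching the article's Java example; the helper names are ours, not from the article). A flow on day 10 of a 31-day period gets weight (31 - 10)/31 under the end-of-day convention and (31 - 10 + 1)/31 under the beginning-of-day convention:

```java
// Sketch of the flow weights W_i described above (helper names are assumptions).
public class DietzWeight {

    // End-of-day convention: W_i = (C - D_i) / C
    static double weightEndOfDay(int calendarDays, int daysUntilFlow) {
        return (double) (calendarDays - daysUntilFlow) / calendarDays;
    }

    // Beginning-of-day convention: W_i = (C - D_i + 1) / C
    static double weightStartOfDay(int calendarDays, int daysUntilFlow) {
        return (double) (calendarDays - daysUntilFlow + 1) / calendarDays;
    }

    public static void main(String[] args) {
        // Flow on day 10 of a 31-day period:
        System.out.println(weightEndOfDay(31, 10));   // 21/31, about 0.677
        System.out.println(weightStartOfDay(31, 10)); // 22/31, about 0.710
    }
}
```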
The Modified Dietz return is the solution ${\displaystyle R}$ to the equation:

${\displaystyle EMV=BMV\times (1+R)+\sum _{i=1}^{n}F_{i}\times (1+R\times {\frac {T-t_{i}}{T}})}$

where

${\displaystyle EMV}$ is the ending market value

${\displaystyle BMV}$ is the beginning market value

${\displaystyle T}$ is the total length of the time period

and

${\displaystyle t_{i}}$ is the time between the start of the period and flow ${\displaystyle i}$

Compare this with the (unannualized) internal rate of return (IRR). The IRR (or more strictly speaking, an un-annualized holding period return version of the IRR) is a solution ${\displaystyle R}$ to the equation:

${\displaystyle EMV=BMV\times (1+R)+\sum _{i=1}^{n}F_{i}\times (1+R)^{\frac {T-t_{i}}{T}}}$

For example, suppose the value of a portfolio is 100 USD at the beginning of the first year, and 300 USD at the end of the second year, and there is an inflow of 50 USD at the end of the first year/beginning of the second year. (Suppose further that neither year is a leap year, so the two years are of equal length.)

To calculate the gain or loss over the two-year period,

${\displaystyle {\text{gain or loss}}=EMV-BMV-F=300-100-50=150{\text{ USD.}}}$

To calculate the average capital over the two-year period,

${\displaystyle {\text{average capital}}=BMV+\sum Weight\times Flow=100+0.5\times 50=125{\text{ USD,}}}$

so the Modified Dietz return is:

${\displaystyle {\frac {\text{gain or loss}}{\text{average capital}}}={\frac {150}{125}}=120\%}$

The (unannualized) internal rate of return in this example is 125%:

${\displaystyle 300=100\times (1+125\%)+50\times (1+125\%)^{\frac {2-1}{2}}=225+50\times 150\%=225+75}$

so in this case, the Modified Dietz return is noticeably less than the unannualized IRR. This divergence between the Modified Dietz return and the unannualized internal rate of return is due to a significant flow within the period, together with the fact that the returns are large.
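The worked example above can be reproduced with a short sketch (Java here, matching the article's Java example; the function name and signature are ours, not from the article):

```java
// Sketch: modified Dietz return = (EMV - BMV - net flow) / (BMV + sum of weighted flows).
// The numbers below reproduce the worked example: BMV = 100, EMV = 300,
// one flow of 50 halfway through the two-year period (weight 0.5).
public class ModifiedDietzExample {

    static double modifiedDietz(double bmv, double emv, double[] flows, double[] weights) {
        double netFlow = 0.0;       // F: sum of external flows
        double weightedFlows = 0.0; // sum of W_i * F_i
        for (int i = 0; i < flows.length; i++) {
            netFlow += flows[i];
            weightedFlows += weights[i] * flows[i];
        }
        return (emv - bmv - netFlow) / (bmv + weightedFlows);
    }

    public static void main(String[] args) {
        double r = modifiedDietz(100.0, 300.0, new double[] {50.0}, new double[] {0.5});
        System.out.println(r); // 150 / 125 = 1.2, i.e. 120%

        // The unannualized IRR of 125% instead satisfies
        // 300 = 100 * (1 + R) + 50 * (1 + R)^((2-1)/2):
        System.out.println(100.0 * 2.25 + 50.0 * Math.sqrt(2.25)); // 300.0
    }
}
```

Note that the IRR has no closed form in general; the check above only verifies that R = 125% solves the IRR equation for this particular example.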
## Annual Rate of Return

Note that the Modified Dietz return is not an annual rate of return, unless the period happens to be one year. Annualisation, which is conversion of the return to an annual rate of return, is a separate process.

## The Simple Dietz Method

Note also that the simple Dietz method is a special case of the Modified Dietz method, in which external flows are assumed to occur at the midpoint of the period, or equivalently, spread evenly throughout the period, whereas no such assumption is made when using the Modified Dietz method, and the timing of any external flows is taken into account.

## Money-Weighted Return

The Modified Dietz method is an example of a money (or dollar) weighted methodology. In particular, if the Modified Dietz returns on two portfolios are ${\displaystyle R_{1}}$ and ${\displaystyle R_{2}}$, measured over a common matching time interval, then the Modified Dietz return on the two portfolios put together over the same time interval is the weighted average of the two returns:

${\displaystyle W_{1}\times R_{1}+W_{2}\times R_{2}}$

where the weights of the portfolios depend on the average capital over the time interval:

${\displaystyle W_{i}={\frac {{\text{Average Capital}}_{i}}{{\text{Average Capital}}_{1}+{\text{Average Capital}}_{2}}}}$

## Linked Return versus True Time-Weighted Return

An alternative to the Modified Dietz method is to geometrically link the Modified Dietz returns for shorter periods. The linked Modified Dietz method is classed as a time-weighted method, but it does not produce the same results as the true time-weighted method, which requires valuations at the time of each cash flow.

## Issues

There are sometimes difficulties when calculating or decomposing portfolio returns, if all transactions are treated as occurring at a single point during the day.
Whatever method is applied to calculate returns, an assumption that all transactions take place simultaneously at a single point in time each day can lead to errors. For example, consider a scenario where a portfolio is empty at the start of a day, so that BMV = 0. There is then an external inflow during the day of F = $100. By the close of the day, market prices have moved, and EMV = $99. If all transactions are treated as occurring at the end of the day, then there is a zero start value BMV and a zero value for average capital, so no Modified Dietz return can be calculated. Some such problems are resolved if the Modified Dietz method is further adjusted so as to put purchases at the open and sales at the close, but more sophisticated exception handling produces better results.

There are sometimes other difficulties when decomposing portfolio returns, if all transactions are treated as occurring at a single point during the day. For example, consider a fund opening with just $100 of a single stock that is sold for $110 during the day. During the same day, another stock is purchased for $110, closing with a value of $120. The returns on each stock are 10% and 120/110 − 1 = 9.0909% (4 d.p.), and the portfolio return is 20%. The asset weights $w_i$ (as opposed to the time weights $W_i$) required to get the returns for these two assets to roll up to the portfolio return are 1200% for the first stock and negative 1100% for the second: solving $w \times 10/100 + (1-w) \times 10/110 = 20/100$ gives $w = 12$. Such weights are absurd, because the second stock is not held short.

## Visual Basic Function for Modified Dietz Return

```vba
Function georet_MD(myDates, myReturns, FlowMap, scaler)
' This function calculates the modified Dietz return of a time series
'
' Inputs.
' myDates.   Tx1 vector of dates
' myReturns. Tx1 vector of financial returns
' FlowMap.   Nx2 matrix of Dates (left column) and flows (right column)
' scaler.    Scales the returns to the appropriate frequency
'
' Outputs.
' Modified Dietz Returns.
'
' Note that all the dates of the flows need to exist in the date vector that is provided.
' When a flow is entered, it only starts accumulating after 1 period.
'
Dim i, j, T, N As Long
Dim matchFlows(), Tflows(), cumFlows() As Double
Dim np As Long
Dim AvFlows, TotFlows As Double

' Get dimensions
If StrComp(TypeName(myDates), "Range") = 0 Then
    T = myDates.Rows.Count
Else
    T = UBound(myDates, 1)
End If
If StrComp(TypeName(FlowMap), "Range") = 0 Then
    N = FlowMap.Rows.Count
Else
    N = UBound(FlowMap, 1)
End If

' Redim arrays
ReDim cumFlows(1 To T, 1 To 1)
ReDim matchFlows(1 To T, 1 To 1)
ReDim Tflows(1 To T, 1 To 1)

' Create a vector of flows
For i = 1 To N
    j = Application.WorksheetFunction.Match(FlowMap(i, 1), myDates, True)
    matchFlows(j, 1) = FlowMap(i, 2)
    Tflows(j, 1) = 1 - (FlowMap(i, 1) - FlowMap(1, 1)) / (myDates(T, 1) - FlowMap(1, 1))
    If i = 1 Then np = T - j
Next i

' Cumulated flows
For i = 1 To T
    If i = 1 Then
        cumFlows(i, 1) = matchFlows(i, 1)
    Else
        cumFlows(i, 1) = cumFlows(i - 1, 1) * (1 + myReturns(i, 1)) + matchFlows(i, 1)
    End If
Next i

AvFlows = Application.WorksheetFunction.SumProduct(matchFlows, Tflows)
TotFlows = Application.WorksheetFunction.Sum(matchFlows)

georet_MD = (1 + (cumFlows(T, 1) - TotFlows) / AvFlows) ^ (scaler / np) - 1

End Function
```

## Java Method for Modified Dietz Return

```java
private static double modifiedDietz(double emv, double bmv, double cashFlow[], int numCD, int numD[]) {
    /* emv:        ending market value
     * bmv:        beginning market value
     * cashFlow[]: cash flows
     * numCD:      actual number of days in the period
     * numD[]:     number of days between the beginning of the period and the date of cashFlow[i]
     */
    double md = -99999; // initialize modified Dietz with a debugging number
    try {
        double[] weight = new double[cashFlow.length];
        if (numCD <= 0) {
            throw new ArithmeticException("numCD <= 0");
        }
        for (int i = 0; i < cashFlow.length; i++) {
            if (numD[i] < 0) {
                throw new ArithmeticException("numD[i] < 0, i=" + i);
            }
            weight[i] = (double) (numCD - numD[i]) / numCD;
        }
        double ttwcf = 0; // total time-weighted cash flows
        for (int i = 0; i < cashFlow.length; i++) {
            ttwcf += weight[i] * cashFlow[i];
        }
        double tncf = 0; // total net cash flows
        for (int i = 0; i < cashFlow.length; i++) {
            tncf += cashFlow[i];
        }
        md = (emv - bmv - tncf) / (bmv + ttwcf);
    } catch (ArrayIndexOutOfBoundsException e) {
        e.printStackTrace();
    } catch (ArithmeticException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return md;
}
```

## References

1. Peter O. Dietz (1966). *Pension Funds: Measuring Investment Performance*. Free Press.
2. Philip Lawton, CIPM; Todd Jankowski, CFA (18 May 2009). *Investment Performance Measurement: Evaluating and Presenting Results*. John Wiley & Sons. pp. 828–. ISBN 978-0-470-47371-9. "Peter O. Dietz published his seminal work, Pension Funds: Measuring Investment Performance, in 1966. The Bank Administration Institute (BAI), a U.S.-based organization serving the financial services industry, subsequently formulated rate-of-return calculation guidelines based on Dietz's work."
3. Bruce J. Feibel (21 April 2003). *Investment Performance Measurement*. John Wiley & Sons. pp. 41–. ISBN 978-0-471-44563-0. "One of these return calculation methods, the Modified Dietz method, is still the most common way of calculating periodic investment returns."
4. "Global Investment Performance Standards (GIPS) Guidance Statement on Calculation Methodology" (pdf). IPC. Retrieved 13 January 2015.
5. *The C.F.A. Digest*. 32–33. Institute of Chartered Financial Analysts. 2002. p. 72. "A slightly improved version of this method is the day-weighted, or modified Dietz, method. This method adjusts the cash flow by a factor that corresponds to the amount of time between the cash flow and the beginning of the period."
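As a cross-check of the day-count weighting used in the two listings above, here is a minimal Python port (my own translation, not from the article): each flow gets weight (numCD − numD[i])/numCD, and a portfolio that starts empty (BMV = 0) with an end-of-day flow produces a zero denominator, which is exactly the failure mode discussed under Issues.

```python
def modified_dietz(emv, bmv, cash_flows, num_cd, num_d):
    """Modified Dietz return; cash_flows[i] occurs num_d[i] days into a num_cd-day period."""
    weights = [(num_cd - d) / num_cd for d in num_d]
    ttwcf = sum(w * f for w, f in zip(weights, cash_flows))  # time-weighted flows
    tncf = sum(cash_flows)                                   # net flows
    denom = bmv + ttwcf
    if denom == 0:
        raise ZeroDivisionError("no capital basis: BMV and weighted flows are zero")
    return (emv - bmv - tncf) / denom

# The two-year worked example, treating the period as 2 "days" with a flow of 50
# one "day" in: the return is again 150/125 = 120%.
assert abs(modified_dietz(300, 100, [50], 2, [1]) - 1.2) < 1e-12

# The pathological case from the Issues section: BMV = 0, a $100 inflow treated
# as occurring at the very end of a 1-day period, EMV = $99.
try:
    modified_dietz(99, 0, [100], 1, [1])
except ZeroDivisionError:
    pass  # no Modified Dietz return can be calculated
```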
https://avidemia.com/pure-mathematics/real-numbers/
We have confined ourselves so far to certain sections of the positive rational numbers, which we have agreed provisionally to call ‘positive real numbers.’ Before we frame our final definitions, we must alter our point of view a little. We shall consider sections, or divisions into two classes, not merely of the positive rational numbers, but of all rational numbers, including zero. We may then repeat all that we have said about sections of the positive rational numbers in §§ 6, 7, merely omitting the word positive occasionally.

Definitions. A section of the rational numbers, in which both classes exist and the lower class has no greatest member, is called a real number, or simply a number. A real number which does not correspond to a rational number is called an irrational number. If the real number does correspond to a rational number, we shall use the term ‘rational’ as applying to the real number also.

The term ‘rational number’ will, as a result of our definitions, be ambiguous; it may mean the rational number of § 1, or the corresponding real number. If we say that $$\frac{1}{2} > \frac{1}{3}$$, we may be asserting either of two different propositions, one a proposition of elementary arithmetic, the other a proposition concerning sections of the rational numbers. Ambiguities of this kind are common in mathematics, and are perfectly harmless, since the relations between different propositions are exactly the same whichever interpretation is attached to the propositions themselves. From $$\frac{1}{2} > \frac{1}{3}$$ and $$\frac{1}{3} > \frac{1}{4}$$ we can infer $$\frac{1}{2} > \frac{1}{4}$$; the inference is in no way affected by any doubt as to whether $$\frac{1}{2}$$, $$\frac{1}{3}$$, and $$\frac{1}{4}$$ are arithmetical fractions or real numbers. Sometimes, of course, the context in which (e.g.) ‘$$\frac{1}{2}$$’ occurs is sufficient to fix its interpretation.
When we say (see § 9) that $$\frac{1}{2} < \sqrt{\frac{1}{3}}$$, we must mean by ‘$$\frac{1}{2}$$’ the real number $$\frac{1}{2}$$.

The reader should observe, moreover, that no particular logical importance is to be attached to the precise form of definition of a ‘real number’ that we have adopted. We defined a ‘real number’ as being a section, a pair of classes. We might equally well have defined it as being the lower, or the upper, class; indeed it would be easy to define an infinity of classes of entities each of which would possess the properties of the class of real numbers. What is essential in mathematics is that its symbols should be capable of some interpretation; generally they are capable of many, and then, so far as mathematics is concerned, it does not matter which we adopt. Mr Bertrand Russell has said that ‘mathematics is the science in which we do not know what we are talking about, and do not care whether what we say about it is true’, a remark which is expressed in the form of a paradox but which in reality embodies a number of important truths. It would take too long to analyse the meaning of Mr Russell’s epigram in detail, but one at any rate of its implications is this, that the symbols of mathematics are capable of varying interpretations, and that we are in general at liberty to adopt whichever we prefer.

There are now three cases to distinguish. It may happen that all negative rational numbers belong to the lower class and zero and all positive rational numbers to the upper. We describe this section as the real number zero. Or again it may happen that the lower class includes some positive numbers. Such a section we describe as a positive real number. Finally it may happen that some negative numbers belong to the upper class.
Such a section we describe as a negative real number.[1] The difference between our present definition of a positive real number $$a$$ and that of § 7 amounts to the addition to the lower class of zero and all the negative rational numbers. An example of a negative real number is given by taking the property $$P$$ of § 6 to be $$x + 1 < 0$$ and $$Q$$ to be $$x + 1 \geq 0$$. This section plainly corresponds to the negative rational number $$-1$$. If we took $$P$$ to be $$x^{3} < -2$$ and $$Q$$ to be $$x^{3} > -2$$, we should obtain a negative real number which is not rational.

1. There are also sections in which every number belongs to the lower or to the upper class. The reader may be tempted to ask why we do not regard these sections also as defining numbers, which we might call the real numbers positive and negative infinity. There is no logical objection to such a procedure, but it proves to be inconvenient in practice. The most natural definitions of addition and multiplication do not work in a satisfactory way. Moreover, for a beginner, the chief difficulty in the elements of analysis is that of learning to attach precise senses to phrases containing the word ‘infinity’; and experience seems to show that he is likely to be confused by any addition to their number.
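Hardy's last example can be made concrete with a small computational sketch (Python, using exact rational arithmetic; this is an illustration added here, not part of the original text). We represent the section defined by $$x^{3} < -2$$ by its lower-class predicate and test a few rationals against it.

```python
from fractions import Fraction

# Lower class of the section defined by P: x^3 < -2
# (a negative real number which is not rational).
def in_lower_class(x: Fraction) -> bool:
    return x ** 3 < -2

# (-13/10)^3 = -2197/1000 < -2, so -13/10 lies in the lower class;
# (-6/5)^3 = -216/125 > -2, so -6/5 lies in the upper class.
assert in_lower_class(Fraction(-13, 10))
assert not in_lower_class(Fraction(-6, 5))

# The lower class has no greatest member: given any member we can always
# find a slightly larger rational that is still in the lower class.
x = Fraction(-13, 10)
better = x + Fraction(1, 1000)
assert in_lower_class(better) and better > x
```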
https://christopherolah.wordpress.com/2010/03/28/the-mandelbrot-set-compact/
## The Mandelbrot Set: Compact?

Several weeks ago, I read something on Wikipedia that shocked me: “The Mandelbrot set is a compact set.” At first I didn’t believe it. How could the Mandelbrot set, in its infinite complexity, be compact?

### Attempt 1: Prove False — Failed

My first attempt to disprove this was very naive: I’d prove it wasn’t Lindelöf by putting an open set around each one of the seahorses in seahorse valley, thereby constructing an open cover with no countable subcover. That something is wrong is immediately clear if we carry this to its natural conclusion: we could put an open cover around each ‘sprout’ on each seahorse, and recurse indefinitely, creating an open cover with an $\aleph_0^{\aleph_0}$ subcover. The Mandelbrot set is a subset of the complex plane, so it is clear that this is nonsense. What is wrong? We have to put an open set around the main component of all of these. It will contain some of the offshoots. We can have an arbitrarily large finite number outside it, but that is very different from infinity… So that argument doesn’t work.

### Attempt 2: Prove False — Failed

The second attempt arose from considering the definition of the Mandelbrot set: all points $x$ that don’t diverge under $\lim_{n\to\infty} (z\to z^2+x)^n$. If the Mandelbrot set $M$ is compact, its image under $(z\to z^2+x)^n$ is compact for any finite $n$ (by the continuity of finite compositions of a continuous function). But this can’t be compact, since the set contains all numbers that don’t diverge to infinity… This is the mistake, right here. It essentially arises from me misunderstanding the Mandelbrot set. At this point, if you had asked me to define the Mandelbrot set, I would have written, and have written several times on this site (sorry!), $\{x \mid x\in\mathbb{C}; \lim_{n\to\infty}\left|(z\to z^2+x)^n(x)\right|\neq\infty\}$. But this isn’t correct. It wasn’t even what I was thinking.
In my head, I was thinking $\{x \mid x\in\mathbb{C}; \left|(z\to z^2+x)^{\aleph_0}(x)\right|\neq\infty\}$, which is also wrong, but at least intelligible. In the previous one, the set would have been the whole complex plane, because applying the function a finite number of times (even a very large number of times) won’t reach infinity, just get arbitrarily close. When we iterate a countably infinite number of times, we can at least reach an infinite magnitude and thus have something like the Mandelbrot set.

### Attempt 3: Prove True — Success Failed

So what is the correct definition of the Mandelbrot set? $\{x \mid x\in\mathbb{C}; \lim_{n\to\infty} |(z\to z^2+x)^n(x)| \leq 2 \}$. Why? Because once the magnitude exceeds $2$, it must escape. Even in the worst case, where $z$ and $x$ are in opposite directions, $2$ is the turning point, because $2^2-2=2$. (This is why the Mandelbrot set is bound in the $2$-unit circle.)

Using this definition, our proof works. The Mandelbrot set is compact iff its image under $x\to\lim_{n\to\infty}(z\to z^2+x)^n(x)$ is compact. By definition, we select that set as $\{x \mid x\in\mathbb{C}; |x|\leq 2\}$, which is compact since it is closed and bounded (Bolzano–Weierstrass theorem). QED.

Update: Or not. There are a number of problems with this proof that I didn’t see when I made it. I blame the fact that I was very new to topology at the time. The first is that the implication for compactness goes the wrong way, i.e. the image of a compact set must be compact, but the reverse is not necessarily true. If this were the only problem, however, we could save our proof by using the fact that the inverse image of a closed set under a continuous function is closed, along with the fact that we know the Mandelbrot set is bounded.
The second is that the composition of a non-finite number of continuous functions is not necessarily continuous (trivial example: think about what would happen if you squared over and over again; the limit would exist, but it would not be continuous). In order to get continuity we would need uniform convergence, which we don’t have. In fact, the limit of the function does not even exist in this case (because of stable cycles), nor does a convergent subsequence necessarily exist. The real proof is very much non-trivial.

### Corollary: The Mandelbrot Set is Closed

Again, Bolzano–Weierstrass theorem.
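The escape-radius claim (once the magnitude exceeds 2, the orbit must diverge) is easy to check empirically. The sketch below (Python, added here for illustration; it is not from the original post) runs the standard escape-time iteration, starting the orbit at $x$ as in the post's definition.

```python
def escapes(c: complex, max_iter: int = 500, radius: float = 2.0) -> bool:
    """Iterate z -> z*z + c starting from z = c; True if |z| ever exceeds the radius."""
    z = c
    for _ in range(max_iter):
        if abs(z) > radius:
            return True
        z = z * z + c
    return False

# Points in the Mandelbrot set never escape; points outside escape quickly.
assert not escapes(0.0)    # 0 stays at 0 forever
assert not escapes(-1.0)   # -1 enters the stable cycle -1, 0, -1, 0, ...
assert escapes(0.5)        # 0.5 grows past 2 within a few iterations
assert escapes(-2.1)       # already outside the disc of radius 2
```

The `-1` case also illustrates the "stable cycles" remark above: the orbit stays bounded but has no limit.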
https://zbmath.org/?q=ai%3Aho.ky+se%3A00000132
# zbMATH — the first resource for mathematics

On the eigenvalue problem involving the weighted $$p$$-Laplacian in radially symmetric domains. (English) Zbl 1401.35233

Summary: We investigate the following eigenvalue problem
$\begin{cases} -\operatorname{div}(L(x) |\nabla u |^{p - 2}\nabla u) = \lambda K(x) | u |^{p - 2} u \quad &\text{in } A_{R_1}^{R_2}, \\ u = 0 & \text{on } \partial A_{R_1}^{R_2}, \end{cases}$
where $$A_{R_1}^{R_2} : = \{x \in \mathbb R^N : R_1 < | x | < R_2 \}$$ $$(0 < R_1 < R_2 \leq \infty)$$, $$\lambda > 0$$ is a parameter, and the weights $$L$$ and $$K$$ are measurable, with $$L$$ positive a.e. in $$A_{R_1}^{R_2}$$ and $$K$$ possibly sign-changing in $$A_{R_1}^{R_2}$$. We prove the existence of the first eigenpair and discuss the regularity and positiveness of eigenfunctions. The asymptotic estimates for $$u(x)$$ and $$\nabla u(x)$$ as $$| x | \rightarrow R_1^+$$ or $$R_2^-$$ are also investigated.

##### MSC:

- 35P30 Nonlinear eigenvalue problems and nonlinear spectral theory for PDEs
- 35J62 Quasilinear elliptic equations
- 35B40 Asymptotic behavior of solutions to PDEs
- 35B50 Maximum principles in context of PDEs
- 35J20 Variational methods for second-order elliptic equations
- 35J25 Boundary value problems for second-order elliptic equations
- 35P15 Estimates of eigenvalues in context of PDEs

##### References:

[1] Agudelo, O.; Drábek, P., Anisotropic semipositone quasilinear problems, J. Math. Anal. Appl., 452, 2, 1145-1167, (2017) · Zbl 1373.35147
[2] Anoop, T. V.; Drábek, P.; Sankar, L.; Sasi, S., Antimaximum principle in exterior domains, Nonlinear Anal., 130, 241-254, (2016) · Zbl 1329.35158
[3] Anoop, T. V.; Drábek, P.; Sasi, S., Weighted quasilinear eigenvalue problems in exterior domains, Calc. Var. Partial Differential Equations, 53, 3-4, 961-975, (2015) · Zbl 1333.35138
[4] Autuori, G.; Colasuonno, F.; Pucci, P., On the existence of stationary solutions for higher-order p-Kirchhoff problems, Commun. Contemp. Math., 16, (2014) · Zbl 1325.35129
[5] Chhetri, M.; Drábek, P., Principal eigenvalue of p-Laplacian operator in exterior domain, Results Math., 66, 3-4, 461-468, (2014) · Zbl 1327.35285
[6] Colasuonno, F.; Pucci, P.; Varga, C., Multiple solutions for an eigenvalue problem involving p-Laplacian type operators, Nonlinear Anal., 75, 4496-4512, (2012) · Zbl 1251.35059
[7] DiBenedetto, E., $$C^{1 + \alpha}$$ local regularity of weak solutions of degenerate elliptic equations, Nonlinear Anal., 7, 8, 827-850, (1983) · Zbl 0539.35027
[8] Drábek, P.; Kufner, A.; Kuliev, K., Half-linear Sturm-Liouville problem with weights: asymptotic behavior of eigenfunctions, Proc. Steklov Inst. Math., 284, 148-154, (2014) · Zbl 1319.34151
[9] Drábek, P.; Kufner, A.; Nicolosi, F., Quasilinear elliptic equations with degenerations and singularities, De Gruyter Series in Nonlinear Analysis and Applications, vol. 5, (1997), Walter de Gruyter and Co. Berlin · Zbl 0894.35002
[10] Drábek, P.; Kuliev, K., Half-linear Sturm-Liouville problem with weights, Bull. Belg. Math. Soc. Simon Stevin, 19, 107-119, (2012) · Zbl 1252.34034
[11] Evans, L. C.; Gariepy, R. F., Measure theory and fine properties of functions, (1992), CRC Press · Zbl 0804.28001
[12] Ho, K.; Sim, I., Corrigendum to "Existence and some properties of solutions for degenerate elliptic equations with exponent variable" [Nonlinear Anal. 98 (2014) 146-164], Nonlinear Anal., 128, 423-426, (2015)
[13] Kawohl, B.; Lucia, M.; Prashanth, S., Simplicity of the principal eigenvalue for indefinite quasilinear problems, Adv. Difference Equ., 12, 4, 407-434, (2007) · Zbl 1158.35069
[14] Ladyzhenskaya, O. A.; Ural'tseva, N. N., Linear and quasilinear elliptic equations, (1968), Acad. Press · Zbl 0164.13002
[15] Le, V.; Schmitt, K., On boundary value problems for degenerate quasilinear elliptic equations and inequalities, J. Differential Equations, 144, 170-218, (1998) · Zbl 0912.35069
[16] Lê, A.; Schmitt, K., Variational eigenvalues of degenerate eigenvalue problems for the weighted p-Laplacian, Adv. Nonlinear Stud., 5, 4, 573-585, (2005) · Zbl 1210.35175
[17] Lieberman, G. M., Boundary regularity for solutions of degenerate elliptic equations, Nonlinear Anal., 12, 11, 1203-1219, (1988) · Zbl 0675.35042
[18] Mitidieri, E.; Pohozaev, S. I., A priori estimates and the absence of solutions of nonlinear partial differential equations and inequalities, Tr. Mat. Inst. Steklova, Proc. Steklov Inst. Math., 234, 1-362, (2001), (in Russian). Translation in · Zbl 1074.35500
[19] Montefusco, E.; Radulescu, V., Nonlinear eigenvalue problems for quasilinear operators on unbounded domains, NoDEA Nonlinear Differential Equations Appl., 8, 481-497, (2001) · Zbl 1001.35094
[20] Murthy, V.; Stampacchia, G., Boundary value problems for some degenerate-elliptic operators, Ann. Mat. Pura Appl., 80, 1-122, (1968) · Zbl 0185.19201
[21] Opic, B.; Kufner, A., Hardy-type inequalities, Pitman Research Notes in Mathematics Series, vol. 279, (1990), Longman Scientific and Technical Harlow · Zbl 0698.26007
[22] Perera, K.; Pucci, P.; Varga, C., An existence result for a class of quasilinear elliptic eigenvalue problems in unbounded domains, NoDEA Nonlinear Differential Equations Appl., 21, 3, 441-451, (2014) · Zbl 1296.35108
[23] Pucci, P.; Serrin, J., The strong maximum principle revisited, J. Differential Equations, 196, 1, 1-66, (2004) · Zbl 1109.35022
[24] Wang, Y.-Z.; Li, H.-Q., Lower bound estimates for the first eigenvalue of the weighted p-Laplacian on smooth metric measure spaces, Differential Geom. Appl., 45, 23-42, (2016) · Zbl 1334.58020

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors.
It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://manpag.es/YDL61/1+grodvi
# GRODVI

## NAME

grodvi − convert groff output to TeX dvi format

## SYNOPSIS

grodvi [ −dv ] [ −wn ] [ −Fdir ] [ files... ]

It is possible to have whitespace between a command line option and its parameter.

## DESCRIPTION

grodvi is a driver for groff that produces TeX dvi format. Normally it should be run by groff −Tdvi. This will run troff −Tdvi; it will also input the macros /usr/share/groff/1.18.1.1/tmac/dvi.tmac; if the input is being preprocessed with eqn it will also input /usr/share/groff/1.18.1.1/font/devdvi/eqnchar.

The dvi file generated by grodvi can be printed by any correctly-written dvi driver. The troff drawing primitives are implemented using the tpic version 2 specials. If the driver does not support these, the \D commands will not produce any output.

There is an additional drawing command available:

\D'R dh dv'
Draw a rule (solid black rectangle), with one corner at the current position, and the diagonally opposite corner at the current position +(dh,dv). Afterwards the current position will be at the opposite corner. This produces a rule in the dvi file and so can be printed even with a driver that does not support the tpic specials, unlike the other \D commands.

The groff command \X'anything' is translated into the same command in the dvi file as would be produced by \special{anything} in TeX; anything may not contain a newline.

For inclusion of EPS image files, grodvi loads pspic.tmac automatically, providing the PSPIC macro. Please check grops(1) for a detailed description of this macro.

Font files for grodvi can be created from tfm files using tfmtodit(1). The font description file should contain the following additional commands:

internalname name
The name of the tfm file (without the .tfm extension) is name.

checksum n
The checksum in the tfm file is n.

designsize n
The designsize in the tfm file is n.

These are automatically generated by tfmtodit. The default color for \m and \M is black.
Currently, the drawing color for \D commands is always black, and fill color values are translated to gray. In troff the \N escape sequence can be used to access characters by their position in the corresponding tfm file; all characters in the tfm file can be accessed this way.

## OPTIONS

−d
Do not use tpic specials to implement drawing commands. Horizontal and vertical lines will be implemented by rules. Other drawing commands will be ignored.

−v
Print the version number.

−wn
Set the default line thickness to n thousandths of an em. If this option isn't specified, the line thickness defaults to 0.04 em.

−Fdir
Prepend directory dir/devname to the search path for font and device description files; name is the name of the device, usually dvi.

## USAGE

There are styles called R, I, B, and BI mounted at font positions 1 to 4. The fonts are grouped into families T and H having members in each of these styles:

TR   CM Roman (cmr10)
TI   CM Text Italic (cmti10)
TB   CM Bold Extended Roman (cmbx10)
TBI  CM Bold Extended Text Italic (cmbxti10)
HR   CM Sans Serif (cmss10)
HI   CM Slanted Sans Serif (cmssi10)
HB   CM Sans Serif Bold Extended (cmssbx10)
HBI  CM Slanted Sans Serif Bold Extended (cmssbxo10)

There are also the following fonts which are not members of a family:

CW   CM Typewriter Text (cmtt10)
CWI  CM Italic Typewriter Text (cmitt10)

Special fonts are MI (cmmi10), S (cmsy10), EX (cmex10), and, perhaps surprisingly, TR, TI, and CW, due to the different font encodings of text fonts. For italic fonts, CWI is used instead of CW. Finally, the symbol fonts of the American Mathematical Society are available as special fonts SA (msam10) and SB (msbm10). These two fonts are not mounted by default.

Using the option −mec (loading the file ec.tmac), EC and TC fonts are used. The design of the EC family is very similar to that of the CM fonts; additionally, they give a much better coverage of groff symbols.
Note that ec.tmac must be called before any language-specific files; it doesn't take care of hcode values.

## FILES

/usr/share/groff/1.18.1.1/font/devdvi/DESC
Device description file.

/usr/share/groff/1.18.1.1/font/devdvi/F
Font description file for font F.

/usr/share/groff/1.18.1.1/tmac/dvi.tmac
Macros for use with grodvi.

/usr/share/groff/1.18.1.1/tmac/ec.tmac
Macros to switch to EC fonts.

## BUGS

Dvi files produced by grodvi use a different resolution (57816 units per inch) from those produced by TeX. Incorrectly written drivers which assume the resolution used by TeX, rather than using the resolution specified in the dvi file, will not work with grodvi.

When using the −d option with boxed tables, vertical and horizontal lines can sometimes protrude by one pixel. This is a consequence of the way TeX requires that the heights and widths of rules be rounded.
https://codegolf.stackexchange.com/questions/137743/hello-world-robbers-thread
Your challenge is to take an uncracked submission from the cops' thread and find, for what input or inputs, the program will print Hello, World! and a newline. Capitalization, spacing, and punctuation must be exact. Please comment on the cop's submission when you've cracked their code. # Bash by Sisyphus Original code: [[ ! "${1////x}" =~ [[:alnum:]] ]]&&[[$# = 1 ]]&&bash -c "$1" Input: __=; (( __++, __-- )); (( ___ = __, ___++)); (( ____ = ___, ____++)); (( _____ = ____, _____++)); (( ______ = _____, ______++)); (( _______ = ______, _______++)); (( ________ = _______, ________++)); (( _________ = ________, _________++));${!__##-} <<<$\'\$___$______$_______\$______________\$___$_______$__\$___________________\'\$\'\$________\$___$______$_______\$________________\$___$_______$______\$___________________,\ \$___$____$_________\$___________________\$___$________$____\$________________\$___$______$______\\$__$______$___\' Explanation: This encodes "echo Hello, World!" as octal escape sequences (\xxx). Except you also can't use numbers, so the first part builds up variables for the numbers 0-7. You can use those to build a string with octal sequences that will evaluate to give you the actual command. But eval is also alphanumeric. So instead this pipes that string as input to another instance of bash. $0 contains the name of the command used to invoke Bash, which is usually just bash (or -bash for a login shell) if you're running it normally (through TIO or by pasting it in a terminal). (This incidentally means that if you try to run this by pasting it into a script, things will go horribly wrong as it tries to fork itself a bunch of times.) But anyway, you can't say $0 directly. Instead, $__ contains the name of $0 ("0"), and you can use indirect expansion to access it (${!__} refers to the contents of $0). And that, finally, gives you all the pieces you need. • Welcome to Programming Puzzles & Code Golf! I've left a comment on the cop answer. 
TIO link for the interested: tio.run/##jc/… – Dennis Aug 7 '17 at 7:01 • Nice! My solution looked different, but used the same idea - constructing an octal string and using the dollar single quote syntax, and using indirection. Well done =) – Sisyphus Aug 7 '17 at 7:40 # 05AB1E, Adnan •GG∍Mñ¡÷dÖéZ•2ô¹βƵ6B -107 Try it online! ### How? •GG∍Mñ¡÷dÖéZ• - big number 2ô - split into twos (base-10-digit-wise) ¹β - to base of input B - convert to base (using 012...ABC...abc...): Ƶ6 - 107 (ToBase10(FromBase255(6))+101 = 6+101 = 107) • Negative base...pretty... – Erik the Outgolfer Aug 5 '17 at 20:25 # totallyhuman, Python 2 A solution is 'Uryyb, Jbeyq!' • Explanation? ---- – MD XF Aug 5 '17 at 18:08 # Pyth: Mr. Xcoder "abcdefghijklmnopqrstuvwxyz" G is the built-in for the lowercase alphabet. The code checks for equality against that. • Nice crack :))) – Mr. Xcoder Aug 5 '17 at 18:19 # Jelly: EriktheOutgolfer 〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ〡㋄ⶐ✐сᑀ⟙ⶐⶐ〡ސЀᶑ # Jelly, 11 bytes sLƽ$Xṙ5O½Ọ Try it online! # Explanation The original code can be explained as such: sLƽ$Xṙ5O½Ọ Main link; argument is z s Split z into n slices, where n is:$ ƽ The integer square root of L the length of z X Random of a list. Returns a random row of the input put into a square ṙ5 Rotate the list left 5 times O Get codepoints ½ Floating-point square root Ọ From codepoints So, just take "Hello, World!" and get codepoints, square them, cast back to codepoints, rotate right 5 times, and then square and flatten the result. # Octave by Stewie Griffin Not sure if this is the correct solution (on TIO it prints the \00 character), but in my octave-cli shell it looks like this: Also in the original challenge it says print nothing (or the null-character), so if nothing is the same as \00 then this should be fine. [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33, 0] Try it online! 
• But that's not what the challenge looks like; if so this would be much easier (replace the last 0 with a 10). – ბიმო Aug 5 '17 at 20:24 • @BruceForte That is what the challenge asks: "Your challenge is to write a program or function that, with a certain input, prints the exact string Hello, World! and a newline." is an exact quote from the challenge. And indeed, that makes the answer trivial. – hvd Aug 5 '17 at 20:34 • @hvd Yeah, but if you look at the image of the OP his solution does not, that's where the main confusion comes from. – ბიმო Aug 5 '17 at 20:37 • @HyperNeutrino FWIW this is what I think the intended style of solution is. – Jonathan Allan Aug 6 '17 at 0:54 # Python 3 by rexroni print("Hello, World!") or __import__('sys').exit(0) Prevent any errors by exiting early. Try it online! # JavaScript (ES6) By Ephellon Dantzler {length:1, charCodeAt:()=>(e='Hello, World!', String.fromCharCode=()=>'')} Try it online! That was pretty easy. I noticed that with any string input it wouldn't be possible to output Hello, World! because the whole thing inside String.fromCharCode will only return multiples of 4, and ! has a char code of 33. So clearly we just have to hack the whole program. Hacking built-ins in JavaScript is trivial if one doesn't try to stop them (and even if one does so, there are usually lots of workarounds...). # JavaScript (ES6), Voile Simple proxy that only returns the wanted character every 3rd time it's called. var i = 0; var string = "Hello, World!"; var proxy = new Proxy([], { get: function(target, property) { if (!(++i%3)) { return string[property]; } return [1]; } }); Try it online! • Yes, this is the intended solution :) Proxy needs more love. – Voile Aug 7 '17 at 12:37 # Ruby, by Histocrat The magic input is: 1767707618171221 30191World! Try it online. • the number 1767707618171221 is a prime, and, written in base 36 it is "hello"
When capitalized, this produces "Hello", which is printed using $><< • the line $><<", #$'"if/30191/ looks for the number 30191 in the input and writes to stdout a string composed of a comma, a space, and whatever is in the input after the 30191 (using the $POSTMATCH, which is referred to here by its short variant, $'). # Lua 5.1 by tehtmi Pass this as the first argument: C=("").char;_G[C(112,114,105,110,116)](C(72,101,108,108,111,44,32,87,111,114,108,100,33)) Assuming the original code is in a file tehtmi.lua, run (in bash or a similar shell): lua tehtmi.lua 'C=("").char;_G[C(112,114,105,110,116)](C(72,101,108,108,111,44,32,87,111,114,108,100,33))' It also works on Lua 5.3, which is what TIO uses, so why don't you try it online? I haven't tested on an implementation that uses the "PUC-Rio's Lua 5.1" core (because I can't really find any information), but my solution probably also works there. ## How? It runs the first argument as code, but only if it contains fewer than 5 lowercase characters. The trick is to run print("Hello, World!"). Another way this can be run is using _G["print"]("Hello, World!"), which only uses strings. We can't use the string directly due to the lowercase-count restriction; however, you can run ("").char to get the function string.char, which can convert a series of bytes to a string. I assigned it to an uppercase variable (so we don't hit the limit) so we can use it to construct both the print and the Hello, World! strings that can be used like above. • Ah, well done! I was thinking of using next instead of char which doesn't work on Lua 5.3 due to randomization of iteration order.
– tehtmi Aug 6 '17 at 22:15 # JavaScript (ES6) by Voile Input must be a string containing this: e( \ _\ =\ >\ "\ H\ e\ l\ l\ o\ ,\ \ W\ o\ r\ l\ d\ !\ "\ +\ \ \ ) Try it using this: const e=eval,p=''.split,c=''.slice,v=[].every,f=s=>(t=c.call(s),typeof s=='string'&&t.length<81&&v.call(p.call(t,\n),l=>l.length<3)&&e(t)(t)) input='e(\n\\\n_\\\n=\\\n>\\\n\"\\\nH\\\ne\\\nl\\\nl\\\no\\\n,\\\n \\\nW\\\no\\\nr\\\nl\\\nd\\\n!\\\n\"\\\n+\\\n\\\n\\\n)' console.log(f(input)) If you don't care about the trailing newline requirement for the output, you can replace the last 6 lines with this instead: !" ) ### How I did it The restrictions on the input are that it's a string, each line is two bytes long or less, and that the total length is 80 bytes or less. My first attempt after understanding that correctly was this: _ => \ H\ e\ l\ l\ o\ ,\ \ W\ o\ r\ l\ d\ ! Note: the \s are to ignore the newlines in the string in the input. This is incredibly crucial to the answer, and I can't believe that I stumbled upon it on accident. (I was familiar with it before but forgot about it) But this didn't work, since the => has to be on the same line as the arguments. Thankfully, I had the idea of wrapping something similar in a string and putting an eval in my input to reduce it to one line, resulting in my final answer. After the eval in the input occurs, the following is generated as a string (which is then eval'd to a function and then run): _=>"Hello, World!"+ This was really tough to crack, but I succeeded in the end. Also, first crack ever! # Cubically by MD XF I imagine this is always dangerous, but I think I have a crack, pending that I am correct in that there is a bug in the interpreter (which I just now compiled). My input 0 1 0 1 0 1 0 1 1 0 1 0 0 Output is Hello, World!\n\n\n\n\n\n\n\n\n\n..... with unending newlines. 
But I noticed the last section of code: :1/1+1$(@6)7 sets the notepad to 0x0A (newline) in a scary way, reads input to face 7 (the input face), and then prints the 6 face (notepad face) repeatedly until the 7 face is zero. If you set face 7 to zero, you should only get one newline. However, I got infinite newlines and my 13th input WAS a zero, and I can verify that the 7 face is always zero by inserting a "print number from 7 face" in the loop: :1/1+1$(@6%7)7 then it prints unending \n0 pairs. I am looking at the specs on the cubically github page and this behavior looks an awful lot like a bug. Changing the last character of the original program from a 7 to a 0 results in the expected behavior. The TIO interpreter exhibits the same incorrect behavior. • Should be fixed now, as soon as Dennis pulls Cubically it'll be fixed on TIO. – MD XF Aug 10 '17 at 16:36 # CJam, Erik the Outgolfer q5/:i:c 00072001010010800108001110004400032000870011100114001080010000033 Try it online! ### How? q - Take input (string) 5/ - split into chunks of five :i - cast to integers :c - cast to characters # C# (.NET Core), Grzegorz Puławski The solution I found makes very substantial use of unprintable characters, so attempting to paste it here didn't work well. The byte values for the input are: 109, 89, 4, 121, 3, 11, 8, 29, 37, 38, 27, 25, 72, 4, 4, 4, 3, 3, 3, 4, 4, 37, 3, 27, 4, 3 Or the string version is available in the input field of the TIO link. Try it online! The original program would take the distinct characters in the input, then reverse the input and multiply the elements of the two lists. Thus I created a string 26 characters long where the first 13 characters were distinct, the last 13 characters all also appeared in the first 13, and each pair of indexes [i, 26-i] multiplied to the byte value of the i-th character in Hello, World!. • Great job! You managed to omit the Take(a.First()-33) trap. 
It also made me realise I forgot Distinct on the Reverse, but oh well, still made for a nice challenge I think. Also it had %255 so you could use higher numbers, in printable ASCII. – Grzegorz Puławski Aug 6 '17 at 10:36 # Ly, LyricLy n[>n]<[8+o<] 2 25 92 100 106 103 79 24 36 103 100 100 93 64 Try it here (although the page does not render the newline). n takes input, tries to split on spaces, and casts to ints; these get put on the stack. o prints ordinal points, and 8+ does what one would think. So input needs to be 8 less than the codepoints in reverse order split by spaces. • This doesn't print a trailing newline, does it? – LyricLy Aug 6 '17 at 1:14 • Ah oops, easily rectifiable! – Jonathan Allan Aug 6 '17 at 1:16 • ...actually is it? I would have thought 2 would work - is that just the herokuapp page not rendering it? – Jonathan Allan Aug 6 '17 at 1:19 • Yeah, adding 2 works. The page just doesn't render trailing newlines. – LyricLy Aug 6 '17 at 1:19 # C (gcc), by Felix Palmen Original code: #define O(c)(((char**)v)+c) #define W(c)*(O(c)-**O(2)+x) main(x,v){puts(W(42));} Arguments: "Hello, World!" "," Try it online! Explanation: W(c) is calculating the address of a string from the argument list to print out. It starts with the address of the cth argument (O(c)), which is the 42nd argument in this case, and then subtracts the first character of the second argument (**O(2)) as an integer offset, and then adds x, which is the number of arguments. W(c) uses the second argument, so you know there need to be at least 3 of them (0, 1, 2). Then "Hello, World!" can go in the first argument, and then to address that argument you need a character whose ASCII value c satisfies 42 - c + 3 = 1, i.e. c = 44. That happens to be ",". • Great explanation, I'll edit my post as soon as I'm on a PC :) – Felix Palmen Aug 8 '17 at 6:01 # JavaScript: ThePirateBay I override the valueOf() and toString() methods of the parsed objects so that coercion fails with a TypeError.
{"valueOf": 7, "toString": 7} • Umm, I can't seem to understand. Please elaborate or something? Especially the dalksdjalkdjaS djalksdjalksdja part, it somewhat confuses me. – Erik the Outgolfer Aug 5 '17 at 19:09 • @EriktheOutgolfer I edited out the spam part, but I have no idea why it was there. – NoOneIsHere Aug 8 '17 at 16:40 • @EriktheOutgolfer oh lol. That was passing to stop my answer from auto-converting to a comment. – Maltysen Aug 12 '17 at 5:23 • @Maltysen Well there's an edit button right there which can be useful next time ;) – Erik the Outgolfer Aug 12 '17 at 11:29 # 6502 Assembly (C64) - Felix Palmen The correct answer is ,52768,23 The explanation is slightly involved. 00 c0 ;load address 20 fd ae jsr$aefd ; checks for comma 20 eb b7 jsr $b7eb ; reads arguments The code first checks for a comma (syntax necessity) and then reads in two arguments, the first of which is a WORD, stored little-endian in memory location 0014 and 0015, the latter of which is stored in the X register. 8a TXA ;Store second argument into A (default register) 0a ASL ; bitshifts the second argument left (doubles it) 45 14 EOR$14 ; XOR that with the low byte of first argument 8d 21 c0 STA $c021 ; Drop that later in the program That's pretty clever, using our input to rewrite the program. It eventually rewrites the counter for the output loop at the end of the program. 45 15 EOR$15 ; XOR that with the high byte of the first argument 85 15 STA $15 ; Put the result in$15 49 e5 EOR #$e5 ; XOR that with the number$e5 85 14 STA $14 ; Put that in$14 Here comes the devious part: 8e 18 d0 STX $d018 ; stores the original second argument in d018 on the C64, d018 is a very important byte. It stores the reference points for things involving the screen's output. See here for more info. If this gets a wrong value, it will crash your C64. In order to print the requisite mixed upper and lowercase letters, this needs to be$17. 
Now we begin our output loop: a0 00 ldy #$00 ; zeroes out the Y register b1 14 lda ($14),y ; puts the memory referenced by the byte ; starting at $14 (remember that byte?) ; into the default register 20 d2 ff jsr$ffd2 ; calls the kernal routine to print the char stored in A c8 iny ; increment Y c0 0e cpy #$0e ; test for equality with the constant$0e That constant is what got written over before. It clearly determines how long the loop runs. It happens to be the right value already, but we need to stick 0e there again. d0 f6 bne *-8 ; jump back 8 bytes if y and that constant weren't equal 60 rts ; ends the program The rest is just the information which we need to print out, starting at memory address c025. So it's just following the math from there. • Absolutely correct, congrats. Might take me a while to properly edit my post, I'm on mobile now. – Felix Palmen Aug 10 '17 at 16:08 • The d018 was very clever, and i like how you posted a hint secretly. – A Gold Man Aug 10 '17 at 16:09 • d018 was the intended door-opener ... the hint was accidental, i meant $FF there, but then decided to leave it. – Felix Palmen Aug 10 '17 at 16:12 # 6502 Machine Code (C64) - Felix Palmen The correct answer is 8bitsareenough The code is rather complicated, involving a lot of self modifying. So instead of fully reverse engineering it, you can just use it to crack itself. Here's a slightly more helpful disassembly of the code, to help understand what happened. The syntax is for KickAssembler. *=$c000 // LOAD ADDRESS jsr $aefd //checks for a comma jsr$ad9e /*Reads in an argument. 
Stores length of it into $61, with the address of the stored arg in$62-3*/ jsr $b6a3 /*Evaluates the string, leaving the pointer on$22-3 and the length on A*/ //I think ldy #$00 loop: lda thedata,y cpy #$01 beq shuffle cpy #$07 beq shuffle cpy #$0b beq shuffle tricks: jsr output iny bne loop output: eor ($22),y //XOR's A with the y-eth letter of our input jmp$ffd2 //good old CHROUT, returns to tricks above thedata: .byte $f0,$48,$fa,$a2, $1c,$6d,$72,$30 .byte $06,$a9,$03,$48,$7c,$a3 shuffle: sta $c048 //drops A in mystery+4, over the constant lda$c026,y sta $c045 //overwrites the low byte of mystery lda$c027,y sta $c046 //overwrites the high byte of mystery ldx #$00 mystery: lda $aefd,x eor #$23 jsr output iny inx cpx #$03 bne mystery cpy #$0e bne loop eor #$1a sta$d018 rts Labelling it up like this was enough for me to see that the code XORs a bunch of constants that are hidden around in order to print out what we need. Since XOR is reversible, if you input the desired output, it'll tell you what the key is. So I switched the last line /*from sta $d018 to*/ jsr$ffd2 so it would print the last required input instead of crashing on a wrong input. And that's that! If there's any interest, I'll crack the code more. • Wow, this is in fact a huge shortcut, I probably should have forced a crash on wrong input earlier in processing. There was another shortcut possible btw, using the debugger of vice. But whatever, it's the correct solution. – Felix Palmen Aug 15 '17 at 7:18 • I didn't know that vice has a debugger. This is what I get for learning a language just to crack answers. – A Gold Man Aug 15 '17 at 7:20 • Edited my post, with some explanations. Good job, just modifying the program not to crash of course is a quite obvious way, I didn't think of this. – Felix Palmen Aug 15 '17 at 7:31 • Nitpick: "drops A in mystery+1, over the constant" <- that's actually mystery+4 with your label. 
Offsets are in bytes :) and FWIW, self-modification is quite common even in serious 6502 code, as long as the code runs from RAM. – Felix Palmen Aug 15 '17 at 9:14 # Explode, Step Hen @_?&4_-j>5&f^~c>&6\|4>7 Rh/qi?,Wcr+du Try it online! • Trying to work out what all the "explorers" are actually doing hurts one's head too much, so I just reverse engineered it (literally :p - starting from the rightmost character I offset each in turn along [and around] the printable character range). • Good job :) I'll add an explanation in a couple hours when I'm back at my PC – Stephen Aug 6 '17 at 12:42 # C (tcc) by Joshua int puts(const char *s) { printf("Hello, World!%s", s); } Try it online! • Darn. I had this exact crack but couldn't get it working on TIO. +1 – MD XF Aug 6 '17 at 4:43 • Sorry 'bout that. FWIW it's fixed now. – Dennis Aug 6 '17 at 4:50 • Do you know what the problem was? – MD XF Aug 6 '17 at 20:31 • I know how I fixed it (tcc had to be built with SELinux support), but I'm not sure what that does or why it was needed in the first place. – Dennis Aug 6 '17 at 23:32 # MATL by Luis Mendo tsZp?x Try it online! # Jelly by Jonathan Allan Original code: œ?“¥ĊɲṢŻ;^»œ?@€⁸ḊFmṪ⁷ Try it online! Input: 1,2586391,2949273,3312154,3312154,1134001,362881,2223505,766081,1134001,1497601,3312154,1860601,140 Explanation: Mainly, it is necessary to understand what the code does. The first thing it does is take the string "!,Word Hel" (with all the needed characters except newline) and create a bunch of permutations of it. The inputs specify permutation numbers, and each pair of permutations from the input is applied to the string, excluding pairs where the first permutation is applied first. Basically, P2(P1(S)), P2(P2(S)), P2(P3(S)), ..., P2(PN(S)), P3(P1(S)), ..., P3(PN(S)), ... ..., PN(P1(S)), ..., PN(PN(S)). These are all concatenated together. Then the last input is reused to take every PNth character from this big string. So, I take PN = len(S)*N = 10*N.
This means we'll take the first character of P2(P1(S)), the first character of P3(P1(S)), all the way up to the first character of PN(P1(S)). To simplify further, I let P1 = 1 which is the identity permutation. Then it suffices to choose any P2 that permutes "H" into the first position, a P3 that permutes "e" into the first position and so on. Luckily, small permutation numbers like the one already chosen for PN don't affect the earlier characters in the string, so PN leaves "!" at the beginning of the string. (If this wasn't true, it would still be possible to solve by choosing a different P1.) • It takes 14 permutations of the list of the 14 permutations, flattens, dequeues and then takes every 140th character. – Jonathan Allan Aug 6 '17 at 13:12 # C (GCC on TIO) by MD XF 4195875 Try it online! # How? It tries to print the second argument as a string, which is a pointer we can determine on the input. It just so happens that at location memory 4195875 starts the "Hello, World!\n" string. The number was determined by adding print("%p", "Hello, World!"); before the printf, converting the hexadecimal number to decimal and tried it out on the original TIO. However, it showed me the printf format string. By trying out some numbers, I found out that the string is located before the format string. So in memory, it would look like this (as a C string): Hello, World!\n\0%2$s\0 This also means that any update to the compiler might break the solution. # JavaScript (Node.js) By Voile ['Hello, World!','','','','','','','','','','','',''] Try it online! # Röda by fergusq [1, "HH", 1, "eellloo", 1, ",,", 1, " WWoo", 1, "rr", 1, "ll", 1, "dd", 1, "!!"] Try it online! Explanation coming in a bit... # JavaScript (ES6), Voile var i = 0; var string = "${t}"; Try it online! • That's not the intended solution :) I'll fix it and be back later. – Voile Aug 7 '17 at 11:44 # PHP by LP154 PDO,FETCH_CLASSTYPE Try it online! 
(TIO doesn't have the PDO class defined, so the above demo uses a codepad that does) Another solution: ReflectionFunction,IS_DEPRECATED Try it online! # JavaScript (ES6), Voile Given the 81 character limit, this probably isn't the intended solution: global.g = ()=>"Hello, World!"; var str = "g"; Try it online! • Genius. Passing a one-char string with global manipulation. Seems that it is the intended solution – Евгений Новиков Aug 7 '17 at 16:13 • That's not the intended solution :( – Voile Aug 7 '17 at 16:18 • Now that I'm thinking about it, I don't think it's allowed to do tons of stuff like this before passing input in a function, unless said stuff can also be composed as part of the input (e.g. inside the function which will be evaluated inside), since the challenge requires you to make an input that will result in 'Hello, World!', not a full program. Otherwise pretty much nothing can be safe. So I think this crack might be invalid? – Voile Aug 7 '17 at 16:31 • @Voile It very well may be. I think it's fair to read the task as if the input must stand alone (i.e. without changes to the external system) - especially if this isn't the intended solution :) No need to mark your cop as cracked. – Birjolaxew Aug 7 '17 at 16:39 • I've made a meta post discussing this. Feel free to leave some discussions there. – Voile Aug 7 '17 at 18:06
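Several of the cracks in this thread rest on small encoding facts that can be verified independently of the golfing languages involved. A quick Python sketch (my addition, not part of the thread) checking a few of them — note that the Ruby magic number actually reads "hello30191" in base 36, which the explanation above abbreviates to "hello":

```python
import codecs

# totallyhuman's Python 2 cop: the magic input is "Hello, World!" ROT13-encoded.
assert codecs.decode('Uryyb, Jbeyq!', 'rot13') == 'Hello, World!'

# Histocrat's Ruby cop: the magic number is "hello30191" read as a base-36 numeral.
assert int('hello30191', 36) == 1767707618171221

# Stewie Griffin's Octave cop: the input is the codepoint list of the target
# string (the trailing 0 in that answer is the NUL / "print nothing" part).
codes = [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33]
assert ''.join(map(chr, codes)) == 'Hello, World!'

# Sisyphus's Bash cop builds the command out of octal escapes: \110 is 'H', etc.
assert ''.join(chr(int(o, 8)) for o in ('110', '145', '154', '154', '157')) == 'Hello'

print('all encoding claims check out')
```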
https://proxieslive.com/how-to-compute-the-coefficient-of-a-generating-function/
# How to compute the coefficient of a generating function?

I would like to find a closed formula for the coefficients of the generating function $$f(x)=-{\frac {{x}^{4}+6\,{x}^{3}-2\,{x}^{2}+6\,x+1}{ \left( x+1 \right) \left( {x}^{2}+1 \right) \left( x-1 \right) ^{3}}}$$. I am trying to do the following: \begin{align} f(x)=\sum_{i,j,k_1,k_2,k_3=0}^{\infty} (-x)^i (-x^2)^j x^{k_1+k_2+k_3}(1+6x-2x^2+6x^3+x^4). \end{align} The expression is very complicated. Is there some simpler method (or some software) to find a closed formula for the coefficients? Thank you very much.
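One standard route to a closed formula is partial fractions: each factor of the denominator contributes a series whose coefficients are known in closed form (geometric series from the linear factors, binomial coefficients from the cubed factor, and an alternating even-power series from the quadratic factor). A sketch of both the decomposition and a series sanity check using SymPy — this is my addition, not part of the original question, and assumes SymPy is installed:

```python
from sympy import symbols, apart, series, simplify

x = symbols('x')
f = -(x**4 + 6*x**3 - 2*x**2 + 6*x + 1) / ((x + 1)*(x**2 + 1)*(x - 1)**3)

# Partial fractions: terms over (x+1), (x**2+1), (x-1), (x-1)**2, (x-1)**3,
# each of which has a known coefficient formula.
decomposition = apart(f, x)
print(decomposition)
assert simplify(decomposition - f) == 0  # the pieces recombine to f

# Sanity check: the first few Taylor coefficients (1, 8, 13, ...).
print(series(f, x, 0, 6))
```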
https://www.mometrix.com/academy/cosine/
# Cosines

## Upper Level Trigonometry: Cosine

Trigonometry, which in Greek roughly means “triangle measure,” is a branch of mathematics that studies the relationships between the side lengths and the angles of triangles. Cosine is one of the “big three” trigonometric functions, along with Sine and Tangent. Its primary use is finding the length of the side of a right triangle that is adjacent, or connected to, an acute angle with a known measure. Today, we’re going to dig deeper into what cosine functions are all about and take a look at some problems where you’ll find one. Let’s get started!

Let’s jump right in and give it a try with this example problem. Cosine, which is commonly abbreviated to three letters as C-O-S, is the ratio of the side adjacent the angle we know, or want to know, over the hypotenuse of the right triangle. Remember that the hypotenuse is the longest side and is always opposite the right angle, which is the largest angle in any right triangle. And a ratio is just a fraction, so we can write it like this: Cosine of the angle is equal to the adjacent over the hypotenuse, or a over h.

If you’ve studied trigonometry before, you’re probably familiar with “sohcahtoa”, which is a way to remember how each of the big three trig functions are formed. The second part of it, CAH, helps us remember that Cosine is equal to the length of the Adjacent side divided by the length of the Hypotenuse.

In this problem, we know one angle and two of the sides. The side we don’t know is adjacent to the angle we know, so we need to use Cosine to find it. We do so by setting up a simple equation, like this: Cosine of 36.87 degrees is equal to x over 5. The cosine of the angle we know, 36.87 degrees, is equal to the length of the adjacent side, which we’re trying to find, so we use a variable, over the length of the hypotenuse, which is five. From here, we can find the cosine of 36.87 degrees on a calculator.
We type in 36.87 and hit the COS key to find that it is equal to 0.799, which we can round to 0.800. So now our equation looks like this: point 8 equals x over 5. Multiplying both sides by 5 results in 4.000 = x. So the measure of the adjacent side is four!

You may have recognized that this is a 3-4-5 special right triangle, since the hypotenuse is 5 and the other two sides measure 3 and 4. We see this special triangle often in standardized tests, so it’s a good idea to try to recognize them when they appear. Note that the sides can also be multiples of 3-4-5, so 6-8-10 or 9-12-15 triangles are also 3-4-5 right triangles.

Okay, back to cosine! We can also use cosine to find an angle of a right triangle if we know the length of the adjacent side and the length of the hypotenuse, which is always the side opposite the right angle. Let’s try one of those:

Let’s see what we know. It’s a right triangle, so trigonometry can definitely be used. We’re trying to find an angle, since it’s marked with an x. We know the hypotenuse, which is 13, and we know the side that is adjacent to that angle we want to know. Since we know the adjacent side and the hypotenuse, that means we need to use cosine. Let’s set up our equation: Cosine of the angle equals adjacent over hypotenuse equals a over h. So, cosine of x is equal to 12 over 13.

In the first problem we did, we took the cosine of an angle and the calculator gave us a number. For this problem, we want to do the opposite. We know the value of the cosine of our angle and want to find the measure of the angle. So what we need is the ARC COS, which is often labeled as ACOS or, even more commonly, as COS to the negative 1 power, or COS inverse, as you’ll see on most HP calculators. This is the inverse operation of COS. This is accessed by hitting the 2nd key and then the COS key. Okay, so we type in 12 over 13 and then 2nd and then COS and our calculator displays 22.619, or approximately 22.62 degrees. We’ve found our answer.
That answer makes sense when looking at our triangle. It’s important to remember that you should only trust the proportions of the diagram if it is drawn to scale. On standardized tests it will often be indicated if the drawing is not to scale.

Cosine functions can also be graphed. Let’s set up a graph of the basic Cosine function: This is a Cosine wave. It looks a lot like the more famous Sine wave, but it’s phase-shifted by pi over 2 radians. Let’s overlap the Cosine and Sine functions so we can see the shift: We can clearly see that they have the same shape. The Sine wave in green starts with a y-value of 0 when x is 0, while the Cosine wave starts with a y-value of 1. Just like a Sine wave, a Cosine wave has a maximum y-value of 1 and its minimum y-value is -1. This is true when the Cosine or Sine function hasn’t been modified.

We can change the height, or the amplitude, of these functions by multiplying them by a coefficient, like this: y equals two times cosine x, or y = 2 cos x. This will simply double the y-values of our base function y = cos x. Let’s take a look at this with our original cosine function in red and our height-doubled cosine function in blue: We can see that our functions have the same frequency, which means that they have the same period, or distance between their peaks. But the two graphs do have a different amplitude. The blue graph is stretched vertically and has y-values as high as 2 and as low as -2, just as we’d expect.

And that’s all there is to it! As you can see, cosine comes in handy when graphing, solving problems with angles, and determining trigonometric identities. Thanks for watching, and happy studying!

by Mometrix Test Preparation | Last Updated: August 6, 2020
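Both calculator computations from the lesson can be reproduced in a couple of lines of code. A quick check using Python's math module (my addition, not part of the original lesson):

```python
import math

# First problem: adjacent = cos(36.87 degrees) x hypotenuse.
# cos(36.87 degrees) is approximately 0.8, so the adjacent side of the
# 3-4-5 triangle comes out to about 4.
adjacent = math.cos(math.radians(36.87)) * 5
print(round(adjacent, 2))   # ≈ 4.0

# Second problem: the angle whose cosine is 12/13 (a 5-12-13 triangle).
angle = math.degrees(math.acos(12 / 13))
print(round(angle, 2))      # ≈ 22.62
```

The `math` functions work in radians, which is why the sketch converts with `math.radians` and `math.degrees`; `math.acos` plays the role of the calculator's COS-inverse key.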
https://mingusspeaks.com/working-at-rnv/4ebe40-control-charts-for-variables-and-attributes-pdf
Control Charts for Variables and Attributes

Four widely-used attributes control charts are the p, np, c, and u charts; 7.1 covers the p-chart and np-chart with equal sample sizes and the p-chart with variable sample size.

Attributes and Variables Control Chart III, Example 7.7: Advantage of Variables C.C.

Lecture 6 - Control chart for variables: this data can be used to create many different charts for process capability study analysis. For example, we might measure the number of out-of-spec handles in a batch of 50 items at 8:00 a.m. and plot the fraction non-conforming on a chart. Continuous variables generally require more refined equipment and more time to measure.

New Attributes and Variables Control Charts under Repetitive Sampling — Muhammad Aslam, Muhammad Azam, Department of Statistics, Forman Christian College University.

Choosing the proper type of control chart: variable control charts are for measured data; attribute control charts are for counted data. Attribute data are counted and cannot have fractions or decimals.

During the 1920s, Dr. Walter A. Shewhart proposed a general model for control charts as follows. Shewhart control charts for variables: let $$w$$ be a sample statistic that measures some continuously varying quality characteristic of interest (e.g., thickness), and suppose that the mean of $$w$$ is $$\mu_w$$, with a standard deviation of $$\sigma_w$$. X charts: for individual measures; use moving ranges.

Variables control charts are used to evaluate variation in a process where the measurement is a variable, i.e., one that can be measured on a continuous scale; they can signal a change (e.g., a shift) before defectives are manufactured, whereas attribute charts cannot.

Contents: Control Charts for Variables; Control Charts for Attributes; C-Charts; Process Capability.
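The Shewhart model quoted above cuts off before stating the actual control limits. For completeness, the conventional form places the center line at the mean of $$w$$ and the limits a distance of $$k$$ standard deviations away (customarily $$k = 3$$, the "3-sigma limits" mentioned below):

```latex
\mathrm{UCL} = \mu_w + k\,\sigma_w, \qquad
\mathrm{CL}  = \mu_w, \qquad
\mathrm{LCL} = \mu_w - k\,\sigma_w
```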
n»3Ü£ÜkÜGݯz=ĕ[=¾ô„=ƒBº0FX'Ü+œòáû¤útøŒûG”,ê}çïé/÷ñ¿ÀHh8ðm W 2p[àŸƒ¸AiA«‚Ný#8\$X¼?øAˆKHIÈ{!7Ä. Attribute charts are a kind of control chart where you display information on defects and defectives. Determining Which Characteristics and Where to Put Control ChartsWhere to Put Control Charts. New control charts under repetitive sampling are proposed, which can be used for variables and attributes quality characteristics. Variables control charts provide an indication of impending trouble. Table of Contents. Attributes control charts will not react unless the process has already changed (more nonconforming items may be produced. Introduced in 1926 by WALTER SHEWART, who concluded that a distribution can be transformed into normal shape by estimating mean and standard deviation. Variable data are measured on a continuous scale. x-bar chart, Delta chart) evaluates variation between samples. PPT Slide. Control Charts for Attributes: The X̅ and R control charts are applicable for quality characteristics which are measured directly, i.e., for variables. Control Charts - What’s Going On? Understand the concept of the control chart method. The time series chapter, Chapter 14, deals more generally with changes in a variable over time. Md and R charts: for sample medians and ranges. Effective Applications of Control Charts … focus on variables charts. Within these two categories there are seven standard types of control charts. Variables control charts. Attribute. Control Chart Control charts help to separate. Attributes control charts have historically been used with 3-sigma limits. Attribute Control Charts. Types of the control charts •Variables control charts 1. Example 5-4. The format of the control charts is fully customizable. Types of Control Charts Control Charts for Attributes … Concept of the Control Chart. 
eñ1ȊEŽÑBJN½äU½|\•y5'çër™-¼è2„¤9ON+TöW³EÎx2®òåOæx2y¼Ï½*…\÷ÕºL~ŝhcq›´oRÙuúe5]ß«YrY¬ú«MÑð£¢ÜTÃyVֻނû렝|Ë¢Š°6?ÜVȤ|È}D“õUh9îÑ\ñG‰`à¿†¤q-Tm ,¦ŽÖ”bF Ó®öIèaæ ˜ÑJÐd© «)ÉÓ&Òúӄ,Qª0¢_Åq´ÂóR’†0³iÐiãL锩T0¥¯1"K1b¬ð. This procedure permits the defining of stages. Control Charts This chapter discusses a set of methods for monitoring process characteristics over time called control charts and places these tools in the wider perspective of quality improvement. Control charts deal with a very specialized Variable vs. P and np Control Charts. the variable can be measured on a continuous scale (e.g. For chart:x For chart:s. s2 CoCo t o C a tntrol Chart Sometimes it is desired to use s2 chart over s chart. arises. View LAB 4- CONTROL CHART FOR VARIABLE AND ATTRIBUTE.pdf from FKE 1032 at Sekolah Menengah Kebangsaan Abdul Jalil. variables: continuous random variable attributes: discrete random variable The variables charts: • offer more information, more sensitive to changes, the signal the special causes (e.g. Attribute control charts are utilized when monitoring count data. However, attribute control charts can cover several defect types on one chart, where two charts (x-bar and R- or (-Charts are required for each single characteristic to be measured. There are two categories of count data, namely data which arises from “pass/fail” type measurements, and data which arises where a count in the form of 1,2,3,4,…. Introduction to Control Charts Variables and Attributes 5/14/99 Click here to start. Next is a table that presents a summary of the gener al control chart formulas for the more commonly used Shewhart control charts. Variable vs. … height, weight, length, concentration). Subgroups of 2 to 30 samples may be used when These are often refered to as Shewhart control charts because they were invented by Walter A. Sara Gradara 7 25 • X-bar chart: based on the average of a subgroup. 
Control Charts for Variables Expected Outcomes Know the three categories of variation and their sources. Know the purpose of variable control charts. One (e.g. For example, if There are instances in industrial practice where direct measurements are not required or possible. General control charts for attributes. Control Charts for Variables 2. xs and Control Charts with Variable Sampland Control Charts with Variable SampleSizee Size. Variables control charts provide an indication of impending trouble (corrective action may be taken before any defectives are produced). Anatomy of a control chart To understand how control charts work, it’s helpful to examine their components. As shown in Figure 1, a control chart has points, a centerline, and control limits. Applied to data with continuous distribution •Attributes control charts 1. Helps you visualize the enemy – variation! 1. Comparison of variables and attributes control charts . Åî”Ý#{¾}´}…ý€ý§ö¸‘j‡‡ÏþŠ™c1X6„Æfm“Ž;'_9 œr:œ8Ýq¦:‹ËœœO:ϸ8¸¤¹´¸ìu¹éJq»–»nv=ëúÌMà–ï¶ÊmÜí¾ÀR 4 ö PPT Slide. Control Charts for Variables: These charts are used to achieve and maintain an acceptable quality level for a process, whose output product can be subjected to […] ADVERTISEMENTS: This article throws light upon the two main types of control charts. 2. Control Charts for Attributes. Variable Control Charts have limitations must be able to measure the quality characteristics in numbers may be impractical and uneconomical e.g. The data for the subgroups can be in a single column or in multiple columns. Variable and attribute data 23 Control Chart selection 24 • X-bar chart • R chart • s chart • Individual chart • Moving Range chart Types of Variable Control Chart. 7 Control Charts for Variables 289 7-1 Introduction and Chapter Objectives, 289 7-2 Selection of Characteristics for Investigation, 290 7-3 Preliminary Decisions, 292 ... 8-10 Operating Characteristic Curves for Attribute Control Charts, 400 Summary, 403 Key Terms, 403 . 
Because control limits for attributes data are often computed in ways quite different from control limits for variables data. Like variables control charts, attributes control charts are graphs that display the value of a process variable over time. These are discussed in the sections on variables and attribute charts. Difference between control charts for variables and attributes. Just like the name would indicate, Attribution Charts are for attribute data – data that can be counted – like # of defects in a batch.. Quality control is the subject of This chapter often computed in ways quite different from limits... Were invented by WALTER a a continuous scale ( e.g variable SampleSizee.. Their components required or possible the three categories of variation and their sources weight, distance or temperature can measured... Teknologi KEJURUTERAAN MEKANIKAL DAN PEMBUATAN UNIVERSITI TEKNIKAL MALAYSIA control charts under repetitive sampling are proposed, which can be to. Mekanikal DAN PEMBUATAN UNIVERSITI TEKNIKAL MALAYSIA control charts under repetitive sampling Muhammad Aslam Muhammad. College University process has already changed ( more nonconforming items may be taken any... X-Bar and R control charts deal with a very specialized attributes control charts will not unless! Have historically been used with 3-sigma limits numbers may be impractical and uneconomical e.g is! Format of the gener al control chart has points, a control chart formulas for the subgroups be! To as Shewhart control charts variables and attributes quality characteristics in numbers be... The gener al control chart is used when the quality characteristic can be measured numerically x and s:... Action may be taken before any defectives are produced ) be used to create different... From control limits for attributes data are counted and can not have fractions or decimals variables. Measured in fractions or decimals 14, deals more generally with changes in a variable chart. 
An improvement project shown in Figure 1, a control chart to understand how charts! Chart where you display information on defects and defectives often computed in ways quite different control. Fractions or decimals and can not have fractions or decimals within these two categories there are instances in industrial where... Measure the quality characteristic can be used to create many different charts for variables Expected Know... At Sekolah Menengah Kebangsaan Abdul Jalil which can be in a variable over time the quality characteristics numbers. Ways quite different from control limits you display information on defects and defectives and defectives continuous scale (.... Of Statistics, Forman Christian College University with variable Sampland control charts for variables Expected Outcomes Know three! Series chapter, chapter 14, deals more generally with changes in a single column or in multiple.. Variable over time Azam Department of Statistics, Forman Christian College University that a distribution can in. Charts deal with a very specialized attributes control charts have two general uses in an improvement project fully customizable Click... Sections on variables and attribute charts are utilized when monitoring count data numbers may impractical... The average of a subgroup over time with 3-sigma limits and R charts introduction This procedure generates x-bar R... For sample means and standard deviations ; uses moving ranges subgroups can be a. Measurements are not required or possible next is a device which specifies the state of statistical control time! Variables and attributes 5/14/99 Click here to start and can not have fractions or decimals Forman Christian University... Subject of This chapter changed ( more nonconforming items may be taken before any defectives are produced ) can!, distance or temperature can be used for variables Expected Outcomes Know the categories... 
Two main types of control charts 1 control charts for variables and attributes pdf to Put control ChartsWhere to control... Measured numerically centerline, and control charts provide an indication of impending trouble ( corrective may. Charts variables and attribute charts are utilized when monitoring count data be transformed into normal by. Statistics, Forman Christian College University uses in an improvement project be used for variables data quite from! Chart formulas for the subgroups can be transformed into normal shape by estimating mean and deviation... Categories of variation and their sources LAB 4- control chart is a device which specifies the of. Some cases, the choice will be clear-cut variable over time under repetitive sampling Aslam. Trouble ( corrective action may be produced and defectives sections on variables and attributes 5/14/99 Click to. Kind of control charts with variable Sampland control charts work, it s! Are discussed in the sections on variables and attributes quality characteristics in numbers may taken! Formulas for the subgroups can be measured on a continuous scale ( e.g, chapter 14 deals. Statistics, Forman Christian College University on defects and defectives a device which specifies state... For example: time, weight, distance or temperature can be in a variable over time examine their.. Applied to data with continuous distribution •Attributes control charts because they were invented by WALTER SHEWART who! Variable SampleSizee Size used Shewhart control charts work, it ’ s helpful to examine their....
https://mathhelpboards.com/threads/primitive-root-challenge.4527/
# Primitive root challenge

#### Poirot

##### Banned

Show that if $x$ is a primitive root of $p$, and $x^{p-1}$ is not congruent to $1 \pmod{p^2}$, then $x$ is a primitive root of $p^2$.

#### caffeinemachine

##### Well-known member, MHB Math Scholar

We assume $p$ is an odd prime. By Fermat, $x^{p-1}\equiv 1\pmod{p}$, so $x^{p-1}=pk+1$ for some integer $k$; by hypothesis, $p\nmid k$. Let the order of $x$ mod $p^2$ be $n$. Then $x^n\equiv 1\pmod{p^2}$, hence $x^n\equiv 1\pmod{p}$, and therefore $(p-1)\mid n$, since the order of $x$ mod $p$ is $p-1$. Write $n=l(p-1)$. Then $x^n=(pk+1)^{l}\equiv 1\pmod{p^2}$, which gives $lpk+1\equiv 1\pmod{p^2}$. Thus $p\mid lk$. Since $p$ does not divide $k$, we have $p\mid l$, and now it is easy to show that $n=p(p-1)=\varphi(p^2)$, and we are done.

#### Poirot

##### Banned

> So we have $x^n=(pk+1)^{l}\equiv 1\pmod{p^2}$. So we have $lpk+1\equiv 1\pmod{p^2}$.

What is the logic in this step? Also, note the result is vacuously true when $p=2$.

#### caffeinemachine

##### Well-known member, MHB Math Scholar

Hello Poirot. By the binomial expansion we have $(1+pk)^l=1+pkl+p^2t$ for some integer $t$. Now it should be clear, I believe.
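A quick numerical sanity check of the statement (plain Python; the helper function is mine), using $p=5$, $x=2$:

```python
def order(x, m):
    """Multiplicative order of x modulo m (assumes gcd(x, m) == 1)."""
    k, y = 1, x % m
    while y != 1:
        y = (y * x) % m
        k += 1
    return k

p, x = 5, 2
assert order(x, p) == p - 1            # 2 is a primitive root mod 5
assert pow(x, p - 1, p * p) != 1       # 2^4 = 16, which is not 1 mod 25
assert order(x, p * p) == p * (p - 1)  # so 2 is a primitive root mod 25
print(order(x, p * p))  # 20 = phi(25)
```

The contrasting case is instructive too: whenever $x^{p-1}\equiv 1\pmod{p^2}$ (a Wieferich-type situation), the order of $x$ mod $p^2$ divides $p-1$ and the conclusion fails.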
https://matchmaticians.com/questions/hguk5e/integrate-int-x-2-1-x-2-frac-3-2-dx-calculus-integrals
# Integrate $\int x^2(1-x^2)^{-\frac{3}{2}}dx$ I am tired of trying to evaluate $$\int x^2(1-x^2)^{-\frac{3}{2}}dx.$$ I am substituting $x=\sin \theta$, but get stuck in the remaining calculations. A detailed answer would be greatly appreciated.
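For completeness, here is how the substitution plays out once it is set up (a sketch of the standard computation):

```latex
x=\sin\theta,\qquad dx=\cos\theta\,d\theta,\qquad (1-x^2)^{3/2}=\cos^3\theta,
```

so that

```latex
\int x^2(1-x^2)^{-\frac{3}{2}}\,dx
= \int \frac{\sin^2\theta\,\cos\theta}{\cos^3\theta}\,d\theta
= \int \tan^2\theta\,d\theta
= \int (\sec^2\theta-1)\,d\theta
= \tan\theta-\theta+C
= \frac{x}{\sqrt{1-x^2}}-\arcsin x+C.
```

Differentiating the result recovers the integrand, since $\frac{d}{dx}\frac{x}{\sqrt{1-x^2}} = (1-x^2)^{-3/2}$ and $\frac{d}{dx}\arcsin x = (1-x^2)^{-1/2}$.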
http://docmadhattan.fieldofscience.com/2017/01/
### Life on Mars

This Voyager spacecraft was constructed by the United States of America. We are a community of 240 million human beings among the more than 4 billion who inhabit the planet Earth. We human beings are still divided into nation states, but these states are rapidly becoming a single global civilization. We cast this message into the cosmos. It is likely to survive a billion years into our future, when our civilization is profoundly altered and the surface of the Earth may be vastly changed. Of the 200 billion stars in the Milky Way galaxy, some--perhaps many--may have inhabited planets and spacefaring civilizations. If one such civilization intercepts Voyager and can understand these recorded contents, here is our message: This is a present from a small distant world, a token of our sounds, our science, our images, our music, our thoughts, and our feelings. We are attempting to survive our time so we may live into yours. We hope someday, having solved the problems we face, to join a community of galactic civilizations. This record represents our hope and our determination, and our good will in a vast and awesome universe. Jimmy Carter

The IAU firmly opposes any discrimination based on factors such as ethnic origin, religion, citizenship, language, and political or other opinion and therefore expects U.S. officials to not discriminate on the basis of religion.

### JMP 57, 12: quantum mechanics for the universe

I will begin publishing posts more regularly, and will also recover some drafts that I wasn't able to finish in 2016. For now, here is a selection of articles from the Journal of Mathematical Physics, vol. 57, issue 12:

Andersson, A. (2016). Electromagnetism in terms of quantum measurements. Journal of Mathematical Physics, 57 (12). DOI: 10.1063/1.4972287

We consider the question whether electromagnetism can be derived from the theory of quantum measurements.
It turns out that this is possible, both for quantum and classical electromagnetism, if we use more recent innovations such as smearing of observables and simultaneous measurability. In this way, we justify the use of von Neumann-type measurement models for physical processes. We apply the operational quantum measurement theory to gain insight into fundamental aspects of quantum physics. Interactions of von Neumann type make the Heisenberg evolution of observables describable using explicit operator deformations. In this way, one can obtain quantized electromagnetism as a measurement of a system by another. The relevant deformations (Rieffel deformations) have a mathematically well-defined "classical" limit which is indeed classical electromagnetism for our choice of interaction. Aerts, D., & Sassoli de Bianchi, M. (2016). The extended Bloch representation of quantum mechanics: Explaining superposition, interference, and entanglement Journal of Mathematical Physics, 57 (12) DOI: 10.1063/1.4973356 An extended Bloch representation of quantum mechanics was recently derived to offer a possible (hidden-measurements) solution to the measurement problem. In this article we use this representation to investigate the geometry of superposition and entangled states, explaining interference effects and entanglement correlations in terms of the different orientations a state-vector can take within the generalized Bloch sphere. We also introduce a tensorial determination of the generators of $SU(N)$, which we show to be particularly suitable for the description of multipartite systems, from the viewpoint of the sub-entities. We then use it to show that non-product states admit a general description where sub-entities can remain in well-defined states, even when entangled. 
This means that the completed version of quantum mechanics provided by the extended Bloch representation, where density operators are also considered to be representative of genuine states (providing a complete description), not only offers a plausible solution to the measurement problem but also to the lesser-known entanglement problem. This is because we no longer need to give up the general physical principle saying that a composite entity exists and therefore is in a well-defined state, if and only if its components also exist and therefore are also in well-defined states.
http://www.techbriefs.com/component/content/article/ntb/tech-briefs/information-sciences/19
Information Technology & Software ### Nearly optimal performance can be obtained with less computation. An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may “know” little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term “modulation classification” or simply “classification” signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. 
The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis — M or M' — is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, I, of equally spaced values of carrier phase. Used in this way, I is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as I → ∞, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure). This work was done by Jon Hamkins of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com/tsp under the Information Sciences category. NPO-40965

### This Brief includes a Technical Support Package (TSP).

Less-Complex Method of Classifying MPSK (reference NPO-40965) is currently available for download from the TSP library.
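The phase-sampled approximation described above can be sketched in a few lines. This is not JPL's implementation: the function names, the unit-energy and known-SNR assumptions, and the BPSK-vs-QPSK test case are all mine, and the per-sample metric is the standard AWGN likelihood averaged over the M equiprobable symbol phases.

```python
import numpy as np

def approx_log_likelihood(r, M, snr, num_phases=16):
    """Approximate log-likelihood that baseband samples r are M-ary PSK,
    replacing the integral over carrier phase by a sum over num_phases
    equally spaced phase hypotheses (the parameter I in the text).
    Assumes unit-energy equiprobable symbols and known linear Es/N0."""
    symbols = np.exp(2j * np.pi * np.arange(M) / M)
    ll = np.empty(num_phases)
    for i, phi in enumerate(2 * np.pi * np.arange(num_phases) / num_phases):
        rotated = r * np.exp(-1j * phi)
        # per-sample metric: Gaussian likelihood averaged over the M symbols
        metric = np.mean(
            np.exp(2 * snr * np.real(np.outer(rotated, symbols.conj()))), axis=1)
        ll[i] = np.sum(np.log(metric))
    # average the likelihoods (not the log-likelihoods) over the phase grid,
    # using a log-sum-exp shift for numerical stability
    return np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()

def classify(r, candidates, snr):
    """Pick the modulation order with the larger approximate likelihood."""
    return max(candidates, key=lambda M: approx_log_likelihood(r, M, snr))

# Simulated BPSK at Es/N0 = 4 (linear) with an unknown carrier phase:
rng = np.random.default_rng(0)
snr, N = 4.0, 200
tx = np.exp(1j * (np.pi * rng.integers(0, 2, N) + 0.3))
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(1 / (2 * snr))
r = tx + noise
print(classify(r, [2, 4], snr))  # BPSK (M = 2) should win decisively here
```

Increasing `num_phases` drives the sum toward the integral and recovers the ML rule, exactly the complexity-versus-accuracy trade-off the article describes.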
https://www.physicsforums.com/threads/bush-on-north-korea.135615/
# Bush on North Korea

• News

Gold Member

Bush has steadfastly refused to talk to North Korea after defining it as one member of the "Axis of Evil", and he has adamantly refused to seek broad-based diplomacy, choosing to "go it alone". It is interesting that he now calls for the "international community" to face up to North Korea. Where is this jerk coming from? He cuts our ties with the international community, alienates most everybody in NATO, appoints an interim ambassador to the UN who wants to disband the UN, and then pleads for back-up? Bush is a joke! We need to elect a congress that will stand up to him, reject his "signing statements" and force him to obey the rule of law. We don't need a dictator running the US. http://news.yahoo.com/s/ap/20061009/ap_on_go_pr_wh/us_nkorea;_ylt=Asv.zPA1YNpvcqNpXW_waJKs0NUE;_ylu=X3oDMTA2Z2szazkxBHNlYwN0bQ-- [Broken]

Last edited by a moderator:

Bystander, Homework Helper, Gold Member

Who's walked out on bi-partite, tri-partite, and two or three combinations of n-party talks? Not the Russians. Not the Chinese. Not the South Koreans. Not the Japanese. Not the U.S.

I may be mistaken, but isn't Bush the one who wants to continue the 6-party talks, while NK insists on 1-on-1 negotiations with the US?

Gold Member

ptabor said: I may be mistaken, but isn't Bush the one who wants to continue the 6-party talks, while NK insists on 1-on-1 negotiations with the US?

Do we reject diplomacy when the pool of participants is reduced to 2 parties? Why? To "save face" that would be lost by negotiating with an enemy that the administration previously swore they would not negotiate with? That is not sufficient cause. Is it to avoid the perception that the US and NK are negotiating as equal partners? That is absurd on the face of it and is not sufficient cause. If there is one good reason why our diplomatic corps should not negotiate privately with NK (if only to stop the proliferation of ballistic missiles), please let me know.
This administration has firmly rejected diplomacy in many instances, leaving international sanctions or military action as their only options. The problem with the former is that the administration has squandered the good will that we once enjoyed with the international community, making sanctions difficult to apply and enforce. I believe we all know the problems that accompany the latter approach.

Well, I'm playing devil's advocate here... I speculate that the administration's reasoning for not entering into 1-on-1 talks is that they want to maintain an air of legitimacy about the process. They could also want to avoid the appeasement policy that was tried in the past, and obviously failed. To be honest, I don't know the intentions and motivations of this administration, so these are my "best guesses" - first order approximations, if you will. I think you may have contradicted yourself. On the one hand, you state that Bush is adopting the go-it-alone policy on NK, yet on the other you state that Bush refuses to enter into 1-on-1 negotiations. Perhaps I'm misunderstanding you. As far as what should be done... I'm not certain how sanctions will pan out. From what I understand their main source of aid is China/Russia - and I'm not convinced they will actually abide by internationally imposed sanctions. As far as our goodwill is concerned, you are correct in that we have squandered it. I don't think this will be a problem in this situation, as most countries recognize that it is in their best interest to keep nukes out of the hands of NK. Surely they will not go against their own interests just to spite the US.

Astronuc, Staff Emeritus

There are several reasons to avoid unilateral discussions with NK, with the biggest being that it gives the appearance to the world that NK is on the same level as the US. Other reasons include the fact that NK is a neighbor of China and Russia, so it's appropriate they be involved.
The US cannot afford unilateral negotiations with every single nation - it is just too impractical. Unilateral discussions with Canada and Mexico make sense only because both nations border the US. Even NAFTA involved multilateral discussion with Canada, Mexico and the US. It is appropriate that the UN plays a role, since what NK does affects all nations.

Gold Member

The go-it-alone policy is the standard Bush ploy. In other words, the administration does not wish to compromise with NATO, the UN, or other global partners when pursuing foreign policy goals. Just rattle the sabre and rely on the US's military superiority to force compliance. That is a one-dimensional approach to foreign policy that is bound to keep us in war and in debt forever as real needs and grievances go unaddressed. The refusal of the US administration to have even private, discreet 1-on-1 talks with NK is emblematic of the John Wayne approach in which overwhelming force can be brought to bear against any problem, no matter how complex and nuanced. Life ain't the movies, though. I would love for our allies and rival superpowers to forgive the slights of the Bush administration and join in meaningful negotiations with NK (and impose and enforce sanctions, if necessary), but the administration has got to make some conciliatory moves to our partners, and I don't see it happening.

OT Note: I am WAY more socially liberal than the Democratic party and WAY more fiscally conservative than the Republican party, and have to hold my nose to vote for any of the candidates that the parties put up. I would very much appreciate voting for a candidate that would pursue diplomacy instead of war, pursue universal health care (so that families cannot be denied help because they are poor or be driven into poverty because a family member develops cancer), and will stay out of people's personal business.
If we were to go to a fair taxation system, roll back the military spending and pork-barrel waste in Congress, etc., we could easily afford to fund SS and health care without running multi-trillion-dollar deficits. Other industrialized nations do this - why is it impossible here? I cannot believe that somehow we are less capable, only that the powers that control our elected representatives are very successful in preventing legislation that would benefit the public to the detriment of their clients.

Gold Member
Astronuc said:
There are several reasons to avoid unilateral discussions with NK, with the biggest being that it gives the appearance to the world that NK is on the same level as the US. Other reasons include the fact that NK is a neighbor of China and Russia, so it's appropriate they be involved. The US cannot afford unilateral negotiations with every single nation - it is just too impractical. Unilateral discussions with Canada and Mexico make sense only because both nations border the US. Even NAFTA involved multilateral discussion among Canada, Mexico and the US. It is appropriate that the UN plays a role, since what NK does affects all nations.

It is a whole lot cheaper to send a diplomat, some translators, and support staff to have talks with the representatives of a country than to mobilize military force against them, or to bribe powerful people in our "allied" governments to stand with us. It is also a whole lot more humane, since it is less likely to end with human suffering. I would far rather pay for a large diplomatic corps staffed with career professionals (not political appointees) than pay for our bloated military. We should have the capability to protect ourselves, but it is a hell of a lot cheaper to prevent wars than to conduct them and pay for the aftermath. One of my dearest old friends was a Quaker, who died a few years ago. I learned a lot from that dear lady, and I miss her.
Futobingoro
A country cannot have effective diplomacy without some sort of power (whether it be economic power, military power, etc.). Call it tribal. Call it primitive, but it's true. There is a reason why people listen to Condoleezza Rice. It is not because of what she says, it is whom she is saying it for. Mongolia, for instance, can probably train or hire diplomats of the same caliber as ones from the rest of the world, but Mongolian diplomats simply do not have as much authority backing them up. So even if a country's agenda does not include acts of aggression, it may still want to build up its military so it can posture itself advantageously at the bargaining table.

State of Denial.

russ_watters Mentor
turbo-1 said:
Do we reject diplomacy when the pool of participants is reduced to 2 parties?

What you are describing is not "reject[ing] diplomacy", it is setting the terms under which diplomacy is to occur.

Why? To "save face" that would be lost by negotiating with an enemy that the administration previously swore they would not negotiate with?

Again, when, precisely, did Bush swear we would not negotiate with North Korea? And the way you are saying that implies you think Bush said he wouldn't negotiate with NK under any terms.

If there is one good reason why our diplomatic corps should not negotiate privately with NK (if only to stop the proliferation of ballistic missiles), please let me know.

This administration has firmly rejected diplomacy in many instances...

Could you please cite one for me?

I would love for our allies and rival superpowers to forgive the slights of the Bush administration and join in meaningful negotiations with NK (and impose and enforce sanctions, if necessary), but the administration has got to make some conciliatory moves to our partners, and I don't see it happening.

Huh? If NK refuses to have those talks, how can other interested parties hold them? Bush wants those talks and other interested parties want those talks.
You seem to have admitted that in your second post. Heck, the very title of the article you posted is "Bush: World leaders united over N. Korea". You are contradicting yourself.
Last edited:

Bystander Homework Helper Gold Member
--- as do the Russians, Chinese, S. Koreans, and Japanese reject bilateral talks between N. Korea and the U.S. This thread is NOT about N. Korea --- it's about Bush, as have half the threads in P&WA been about Bush. If the U.S. begins a bilateral negotiation, the "whine cellar" gripe is going to be that "Bush is 'John-Wayneing' it, and ignoring the interests of the other interested parties in the region." "Heads," the libs win; "tails," Bush loses. Somebody wanta close this train-wreck?

Russ said:
Could you please cite one for me?

Come now Russ, you know better than that.

CharlieRose said:
RICHARD HOLBROOKE: Iran is the crux of the problem. It has been for a long time. It has gotten increasingly dangerous. Bill and I certainly agree on that. My argument is that we should be prepared to talk to them directly, if possible.
CHARLIE ROSE: That's diplomacy.
RICHARD HOLBROOKE: And that there's nothing wrong with diplomacy as long as it's backed up by the readiness to use force if necessary. Now, as I listen to Bill and some of his colleagues, I have the feeling that, for reasons which are either real or politically driven, that, Bill, you and your colleagues, many of you seem to feel that the very word diplomacy connotes weakness. I simply don't share that. I believe -- and this is a response also to what Newt wrote -- I believe that you start with diplomacy backed up by the power and authority of the United States. I'm talking now about Iran and Syria. I'm not talking about Iraq. Iraq is a desperately difficult situation, which I hope we can address separately in a minute, because it's the underlying problem. Many of the things -- I don't -- of course, I reject your comments about the Clinton administration. He did plenty of very effective things.
And by the way, the continuous negotiations with Syria and in the Middle East, from Kissinger right through Colin Powell, were essentially productive in keeping the lid on in a very dangerous area. That's my view, and I do not believe diplomacy equals weakness. And my own record -- and unlike you, I've had the privilege of being shot at for my country in three wars, on three -- in many wars on three continents. I've advocated the use of force and worked on targeting. I know what the situation is. I'm ready to use force again if necessary any time to defend the national security. But you start with diplomacy. That's the lesson of the Cuban missile crisis. Kennedy was ready to use force if he needed to, but you don't -- but you seem to say that diplomacy is weakness. And I just don't buy that.
BILL KRISTOL: I don't say that, but let's look at the instances you just cited. We kept the lid on with our diplomacy with Syria. Yes, we also allowed Hezbollah to build up a very strong state within a state. We allowed a terrorist group to occupy a territory bordering on Israel. I don't think our diplomacy with Syria was successful over the last 10 years. I guess that's a difference we have. I'm not against diplomacy in all cases. We have to judge it by results. I totally agree with that.
RICHARD HOLBROOKE: Bill, can I just clarify so we at least narrow our difference. I would never claim that the diplomacy with Syria was successful in those terms. It succeeded in keeping the lid somewhat on. It kept the Syrians out of the conflict. And this goes to a core issue. Do you negotiate with people who are either in the evil empire -- Reagan negotiated with the Soviet Union and he was right to do so -- or the axis of evil? President Bush has said he will not talk to North Korea, to Iran, and now he says he won't talk to Syria. I can tell you for a fact that the Syrians are pleading for direct dialogue on an authoritative level.
And for the administration to say publicly, as they have at every level, that the Syrians know what they have to do and just leave it at that is not responsible. That is part of diplomacy. But I'm not afraid to use force. And you shouldn't confuse me with strawmen that you create in your column.

Where is it with Iran? Where is it with Syria? Where is it with North Korea? Where was it leading up to the war in Iraq? Ignoring weapons inspectors' findings, and telling the world "screw you, we're going it alone." Very diplomatic. Diplomacy does not mean "do as I say or else I'll bash your head in." That's not diplomacy, that's bullying, and the rest of the world isn't going to take that crap anymore. The US has relations with plenty of other countries we don't agree with, so this whole notion that we can't talk to them is bull and we both know it.
Last edited:

http://www.democracynow.org/article.pl?sid=06/10/11/1430219 [Broken]
DMN said:
AMY GOODMAN: Professor Cumings, you just mentioned how A.Q. Khan had gotten nuclear material to North Korea. Three years ago, investigative reporter Seymour Hersh revealed that Pakistan was helping North Korea build the bomb. Hersh reported the CIA had concluded that Pakistan had shared sophisticated technology, warhead design information and weapons testing data with the Pyongyang regime. But according to Hersh, the Bush administration sat on the CIA report, because the White House didn't want to divert the focus from Saddam Hussein, and Pakistan had become a vital ally in Bush's war on terror.
BRUCE CUMINGS: Well, I think Seymour Hersh is right. Pakistan did have a nuclear Wal-Mart for North Korea, Iran and Libya, other countries. We did not punish Pakistan in any way for this, even though they were the worst proliferators by far in the world.
And the Bush administration, when it came in, in 2000, was presented during the transition, by Clinton administration officials, with intelligence that North Korea had begun importing enriched uranium technologies from Pakistan, and they sat on it for 18 months until the preemptive doctrine was announced in September of 2002. James Kelly then went to Pyongyang the following month, in October of ’02, and confronted the North Koreans with this evidence of a second nuclear program. And the North Koreans, as they almost always do when confronted with their backs to the wall, said, “Fine, you know, we have it. We’ll see you later.” And they proceeded to kick out UN inspectors that had been on the ground for eight years, removed themselves from the NPT, the Non-Proliferation Treaty, and reopened their reactors. Furthermore, they got control of 8,000 fuel rods that had been encased in concrete for eight years, and that probably is the plutonium that would be at the basis of this bomb test. So, this was a complete and utter failure, because North Korea paid no penalty for jumping out of the NPT again, getting back their reactors. And the Bush administration continued to essentially argue inside the administration about whether to topple the regime or try and negotiate with it. So it was really quite a remarkable failure, and North Korea, let alone Pakistan, neither one of them, until now, has really paid much of a price for this. AMY GOODMAN: Yesterday, Arizona Senator John McCain gave a speech in Detroit, and he said, “I would remind Senator Hillary Rodham Clinton and other critics of the Bush administration policies that the Framework Agreement of the Clinton administration was a failure.” Explain what that Framework Agreement was. BRUCE CUMINGS: Well, it was an agreement that came after a very dire threat of war in 1994 that froze their entire plutonium facility at Yongbyon in North Korea. 
They had seals on the doors, closed-circuit television, and at least two UN inspectors on the ground, 24/7, all the time. So there isn't any possibility of that agreement having failed. It held for eight years and denied North Korea the plutonium that would have allowed them to make more bombs. Senator McCain is engaged in some sort of demagoguery here, because I don't know a single expert who would say that that Framework Agreement was not successful, at least for eight years, in keeping North Korea's plutonium facility shutdown. Now, the enriched uranium program is not even clearly a program for a bomb. It may be to enrich uranium for light-water reactors that were expected to have been built by the United States and its allies. But even if it is for a bomb, it’s much more difficult to enrich uranium to a weapons grade and create a uranium bomb than it is to create a plutonium bomb, plus they already have now, thanks to the Bush administration’s policies, the wherewithal for six to eight plutonium bombs, so in effect they don’t even need the other program. People say North Korea cheated. Wow, isn’t that really terrible? Kim Jong-il cheated. I don't know anyone who thinks that Kim Jong-il is a person who can be trusted, but I do know that North Korea kept that agreement made in 1994 and the U.S. did not. We pledged ourselves to normalize relations with North Korea. We didn’t do that. We pledged ourselves to build light-water reactors. They got started in 2002. So when you actually look at that agreement between country X and country Y, rather than the endlessly demonized North Korean regime, you see that we are responsible, as well as the North Koreans, for the current situation. But as far as Senator McCain is concerned, he is just flat wrong. It’s not a partisan question. It’s a question of knowing what that agreement was and whether it was carried out or not. Bruce Cumings is a professor of history at the University of Chicago. 
He is the author of several books on North Korea, his latest North Korea: Another Country and Inventing the Axis of Evil. He joins us in the studio from Ann Arbor, Michigan. Welcome to Democracy Now!

Yeah, Bush wants diplomacy, right....
Last edited by a moderator:

Skyhunter
I can't say I am much impressed with his success, or lack thereof. So what will his next step be? I shudder to even try and guess.

russ_watters Mentor
Gokul43201 said:
Russ, what turbo is saying is that Bush continues to reject bilateral talks with DPRK.

turbo-1's statements were more general than that, and they are self-contradictory. I.e., saying we want to "go it alone" while rejecting bilateral talks is self-contradictory. Bush is rejecting bilateral talks partly because we do not want to go it alone. We have a coalition, and if we decide to do bilateral talks, we will be breaking that coalition to "go it alone".
Last edited:

russ_watters Mentor
cyrusabdollahi said:
Come now Russ, you know better than that. ....quote.....

What does that quote have to do with North Korea?

Where is it with Iran? Where is it with Syria? [snip] Where was it leading up to the war in Iraq? Ignoring weapons inspectors' findings, and telling the world "screw you, we're going it alone." Very diplomatic.

What do those have to do with North Korea?

Where is it with North Korea?

It has been covered in the thread. Do you have an argument as to why the lines of reasoning already argued are flawed?

Diplomacy does not mean "do as I say or else I'll bash your head in." That's not diplomacy, that's bullying, and the rest of the world isn't going to take that crap anymore.

Turbo-1, you really need to get off your soap-box and start paying attention to what actually happened. The US did negotiate multilaterally with North Korea and a compromise was reached. The next day, North Korea broke that agreement and demanded a nuclear reactor be provided for them before they would come back to the table.
Like you said: that's not diplomacy, that's bullying. You're looking at this issue backwards. Turbo-1, it really looks like you jumped into a rant without even knowing the history of what actually happened. Could you comment on that actual history (described above) so I at least know you are aware that it happened?

russ_watters Mentor
Skyhunter said:
I can't say I am much impressed with his success, or lack thereof.

Indeed - he's been busy dealing with a country that Clinton had 8 years to do something about, with little success. :uhh:
edit: btw, that's three years (plus a few months) since the current situation started with North Korea pulling out of the NPT.
Last edited:

Gold Member
There are dynamics here that are important to NK but are glossed over in our press. Technically, we are still at war with them, and have not recognized them diplomatically. Engaging in one-to-one talks, even on a low level, will affirm that the US is dealing with them as a nation, not a rogue territory. This may be enough to gain some concessions at the outset - at least an agreement to suspend fuel-enrichment programs (whether they are effective or not). Anyone who has followed the tortured reasonings for and against particular diplomatic initiatives with Taiwan and mainland China might appreciate NK's position. At this point, the US treats NK as a rogue territory, with China as its main supporter, and China views Taiwan as a rogue territory with the US as its main supporter. There are similar alliances all over the world. If the present administration had any creativity at all, they would ask our main trading partner (PRC) to engage in productive one-on-one diplomacy with Taiwan in exchange for us engaging in one-on-one diplomacy with NK. Sometimes we have to give a little to get a little, and the Chinese and the North Koreans know this. It's better to be talking with these people than to be rattling sabres and throwing down challenges. The stakes are too high.
Diplomacy and negotiation have critical roles in foreign policy, and the present administration refuses to use these tools. Sometimes, internal pressures in a country can be leveraged in a negotiation to the extent that the country's leadership is pressured into accepting things that we want and that they adamantly oppose. Suppose that the US agrees to help ease NK's critical shortages of foods, medicines, etc., in exchange for their cooperation in curtailing nuclear programs and in converting their economic base to the production of consumable products instead of dumping all their money into the military? If the NK people knew of such an initiative, it would be hard for Kim to rally them against it. If we could help stabilize the Korean Peninsula, gain a trading partner, and reduce Kim's dictatorial stranglehold on NK, would we not all benefit?
Last edited:

russ_watters Mentor
Here is some info on the six-party talks: http://en.wikipedia.org/wiki/Six-party_talks
Here is some info on North Korea breaking up the six-party talks and attempting to break up the coalition: http://www.cnn.com/2005/WORLD/asiapcf/09/19/korea.north.talks/index.html [Broken]

North Korea said Tuesday it would begin dismantling its nuclear program only if the United States provides a light-water reactor for civilian power. The demand could threaten a day-old agreement between North Korea and the five nations involved in nuclear disarmament talks.

A timeline of North Korea events: http://news.bbc.co.uk/2/hi/asia-pacific/1132268.stm
An article discussing N. Korea's demand for bilateral talks and why they would do that (to break up the coalition): http://www.washingtonpost.com/wp-dyn/articles/A16214-2005Feb11.html

North Korea's request for direct talks appears to be aimed at trying to split the fragile unity of its bargaining partners....
Throughout the two years of talks, North Korea has sought to win upfront, direct benefits from the United States as a condition for agreeing to end its nuclear programs. Despite pleas from South Korea, the Bush administration has refused even symbolic gestures until North Korea gives up its programs and its claims are verified by U.S. intelligence...... Another Asian official said the predominant view in his government is that this is a negotiating ploy, particularly because North Korea's negotiating partners had made it clear Pyongyang needed to make a counteroffer to the U.S. proposal. But he said there is a minority view that North Korea will not give up its weapons and thus a change in tactics is necessary.

Essentially, North Korea is demanding concessions before it will even go back to the bargaining table it was at before. Ridiculous.
Last edited by a moderator:

russ_watters Mentor
turbo-1 said:
There are dynamics here that are important to NK but are glossed over in our press. Technically, we are still at war with them, and have not recognized them diplomatically. Engaging in one-to-one talks, even on a low level, will affirm that the US is dealing with them as a nation, not a rogue territory.

Again, you are mischaracterizing the history. The action in Korea is now and was always a United Nations action. http://en.wikipedia.org/wiki/Korean_War And the words "recognized them diplomatically" are misleading at best. We certainly recognize that they are a country with a right to exist. We don't have an embassy there for the same reason they have no embassy here.

At this point, the US treats NK as a rogue territory, with China as its main supporter, and China views Taiwan as a rogue territory with the US as its main supporter. There are similar alliances all over the world.
If the present administration had any creativity at all, they would ask our main trading partner (PRC) to engage in productive one-on-one diplomacy with Taiwan in exchange for us engaging in one-on-one diplomacy with NK.

What?!?! Those situations are nowhere near analogous. For starters, the US doesn't claim ownership of North Korea.

Sometimes we have to give a little to get a little, and the Chinese and the North Koreans know this.

No, the North Koreans are demanding concessions before they will return to the negotiating table they left.

It's better to be talking with these people than to be rattling sabres and throwing down challenges.

You are reading your newspaper upside-down again. The US is not issuing challenges and demands; North Korea is. The US has stated explicitly that we will return to that negotiating table with no pre-concessions by either side. What could be more fair than that?

Diplomacy and negotiation have critical roles in foreign policies, and the present administration refuses to use these tools.

Simply not true, and you now must know it. The US was talking, and North Korea left the talks. And an agreement was made in one round that NK went back on the very next day. Who is not negotiating in good faith?

Suppose that the US agrees to help ease NK's critical shortages of foods, medicines, etc., in exchange for their cooperation in curtailing nuclear programs and in converting their economic base to the production of consumable products instead of dumping all their money into the military?

That is precisely what we are trying to do and what NK agreed to do before they renounced their own negotiated position.

If the NK people knew of such an initiative, it would be hard for Kim to rally them against it.

NK already agreed to it, but the NK people know only what Kim tells them.

If we could help stabilize the Korean Peninsula, gain a trading partner, and reduce Kim's dictatorial stranglehold on NK, would we not all benefit?

Certainly. How do we do that?
Will assisting in NK's nuclear program prior to them sitting back down at the negotiating table help...?

Bystander Homework Helper Gold Member
turbo-1 said:
(snip) Suppose that the US agrees to help ease NK's critical shortages of foods, medicines, etc., in exchange for their cooperation in curtailing nuclear programs and in converting their economic base to the production of consumable products instead of dumping all their money into the military? If the NK people knew of such an initiative, it would be hard for Kim to rally them against it. If we could help stabilize the Korean Peninsula, gain a trading partner, and reduce Kim's dictatorial stranglehold on NK, would we not all benefit?

"If pigs had wings ...." Concatenations of longshots aren't viable alternative policies. You assert, "Diplomacy and negotiation have critical roles in foreign policies, and the present administration refuses to use these tools," from a privileged position within the State Department? Or are you guessing, based on Bush not doing what you think you would do in his position, and on the fact that he hasn't consulted you personally for guidance on the matter? You have no idea what offers have been made. I have no idea. Russ has no idea. We ain't in "the loop." Complaints about the performance of government that are based entirely on speculation are pointless. State a problem; discuss the possibilities, discuss the circumstances, but don't go guessing what is and what ain't and proceeding to conclusions.

Russ said:
What does that quote have to do with North Korea?

It has to do with what you questioned turbo about, namely:

turbo-1 said:
This administration has firmly rejected diplomacy in many instances...

And you replied:

Could you please cite one for me?

More importantly, it demonstrates a clear pattern that this administration does not believe in true diplomatic negotiations to reach their objectives.
(let's not get into a he said, she said, because that's pointless)

Russ said:
Turbo-1, you really need to get off your soap-box and start paying attention to what actually happened. The US did negotiate multilaterally with North Korea and a compromise was reached. The next day, North Korea broke that agreement and demanded a nuclear reactor be provided for them before they would come back to the table. Like you said: that's not diplomacy, that's bullying. You're looking at this issue backwards. Turbo-1, it really looks like you jumped into a rant without even knowing the history of what actually happened. Could you comment on that actual history (described above) so I at least know you are aware that it happened?

No, that was me who said it :tongue2:. Thanks for that info, as I was not aware. Can you put a date to that fact so I can put it into perspective? Did you see the transcript I provided?

People say North Korea cheated. Wow, isn't that really terrible? Kim Jong-il cheated. I don't know anyone who thinks that Kim Jong-il is a person who can be trusted, but I do know that North Korea kept that agreement made in 1994 and the U.S. did not. We pledged ourselves to normalize relations with North Korea. We didn't do that. We pledged ourselves to build light-water reactors. They got started in 2002. So when you actually look at that agreement between country X and country Y, rather than the endlessly demonized North Korean regime, you see that we are responsible, as well as the North Koreans, for the current situation.
https://engineering.stackexchange.com/questions/2265/why-does-the-microwave-plate-start-in-a-random-direction?noredirect=1
# Why does the microwave plate start in a random direction?

...or what type of motor is used there? I found this type of motor - usually powered with low-voltage AC (~12V), but at times with 230V - in several appliances that require very slow rotation and sometimes a fair momentum: a color-shifting lamp, the microwave plate, an ice cream mixer... The funny property of it is that it picks the start direction at random and keeps spinning in that direction until switched off - but I never faced a situation where it would get stuck in the "unstable balance" position. So, what is this type of motor and why does it behave that way?

• What do you mean by "it picks the start direction at random"? Do you mean that an individual motor will rotate one direction the first time you switch it on, then another way the next time you turn it on, and you don't understand how it chooses the direction? Mar 31 '15 at 20:36
• @AdamMiller: Yes; I stop the microwave and find the hot cup near the far end. I start it to rotate it closer to the door, and half of the time it will keep rotating in the original direction; the other half it will reverse. I once tried to determine the rule, checking whether it remembers the prior direction and reverses it, but the choice between "clockwise/anticlockwise" seems to be entirely random. – SF. Mar 31 '15 at 20:44
• I don't think this is a general truth of all microwaves. What brand/model do you have? Mar 31 '15 at 21:04
• @ChrisMueller: Clatronic MW 721, although for the sample size of 4 different models where I paid attention to it, all 4 exhibited this behavior (but I won't find the models now). I took one apart when it broke down and the motor was a very short, wide cylinder (about 2cm height, 5cm diameter).
I found a very similar motor in a fancy "optic fibre lamp", rotating a colorful, transparent disk between the bulb and a bunch of fibres fanning out from the top, so that their tips shone with colors changing over time as various colors on the disk would filter the light. – SF. Mar 31 '15 at 22:03
• (since the motor was buzzing in an annoying way my mother asked me to disable or remove it, so that the fibres shine just with white light, but the lamp stays quiet; that's why I took it apart.) One more thing: the motor axis is off-center from the cylinder. (I suspect there are some gears inside). – SF. Mar 31 '15 at 22:05

The motor tends to be a cheap synchronous AC motor. The design uses the shift in AC polarity (going from positive to negative half-cycles and back) to create a magnetic field in a coil, which interacts with a multi-pole permanent magnet. As the magnetic polarity shifts in the coil, the magnet moves accordingly (opposites attract). Once it is moving, it is easier for the magnetic poles to keep pulling it along. The permanent magnet is attached to a shaft, which goes through multiple gears to reduce the rotation rate and increase the torque.

First, the middle of the motor is a plate. Underneath it is a coil in a plastic bobbin. Now notice the hole marked 1. It has fins. Some come from the bottom of the motor housing, some from the plate that's hiding the coil. That plate will take the magnetic field from the top of the coil and pass it to the fins connected to it. The bottom housing will take the magnetic field from the bottom of the coil and pass it to the fins connected to it. These alternating fins create the stators of the synchronous motor. The coil and fins can be seen in the video embedded in the original answer.

There are two reasons for the motor to change direction. The first is that the motor is cheap and nothing was added to force it to go in one direction. In more expensive motors, typically one of the gears will have a stop notch that prevents it from going backwards.
It would stall the motor for one half of the AC phase, then continue the way it should, if it starts the wrong way. The more relevant reason is twofold. One, the fins that make up the stators of the motor are not of equal size. This is to prevent the motor from getting stuck, rocking back and forth from equal torque. (If you push a car one way, and then push it back the other way with the same exact force and distance, the car will never move from that spot, just rock back and forth gently.) Since the permanent magnet can stop between these unevenly sized fins, the next time it starts it will get pulled one way or the other. And since the motor could start anywhere in the AC phase, depending on how the magnet is facing compared to the stator magnetic field, it could be pulled one way or the other.

TL;DR: a cheap motor with no directional stopping gear, loose tolerances, uneven fin/stator sizing, and an uncertain positive or negative starting AC phase leads to the motor randomly starting in either direction.

Three types of motor may be used, any of which could do this. One of these (the synchronous motor) is what is used here, and it is a subset of the brushless DC motor. (A misnomer, as there is no pure DC used in the motor proper in a BLDCM.) The actual motor type is a synchronous motor, identified correctly by jpa. The synchronous motor is a special case of the BLDCM (brushless DC motor) that I describe below. In the general case a BLDCM generates an AC field from a DC source - either a fixed-frequency field that the rotor follows at fixed speed, OR a variable-frequency field whose frequency is based on the current rotor speed and applied in such a way that the rotor "chases" the field which is derived from its own motion. (Phase lead/lag allows speed change - another subject.) In the synchronous motor seen here there is a coil with its winding axis vertical when the motor sits flat on a surface.
The coil connects to the AC mains (in this case low-voltage AC via a transformer), so it alternately produces N-S or S-N magnetisation along its axis. Poles are created by adding plates with multiple radial tabs - each tab is a pole. As the coil changes NS, SN, NS, the alternate tabs are all N or all S, and as the field changes the NSNSNS... pattern moves in steps around the circumference. The rotor has N and S permanent magnet poles. These initially align in opposite phase wrt the stator poles, and when these reverse polarity the rotor is attracted AND repelled to a position one tab away. However, if fully symmetrical, an N pole on the rotor could be attracted to the S to its "left" or the S to its right. Once rotating it will have a preference for the pole in its direction of motion but, at startup, could go either way. And does.

Stator pole polarities reverse successively: NSNSNS ... SNSNSN ... NSNSNS ...

Rotor follows stator changes:

    (1) From here:
        NS        <- rotor in position 3-4
        SNSNSNSN  <- stator

    (2a) To here is valid:
        NS        <- rotor moves left to position 2-3
        NSNSNSN   <- stator, changed polarity from (1)

    (2b) But so is:
        NS        -> rotor moves right to position 4-5
        NSNSNSNSN <- stator, changed polarity from (1)

In this case there is no DC - the field is supplied from the AC mains and the rotor "chases" the rotating AC field.

Motor types:

(1) Most usual in the past - traditionally a "shaded pole" motor may be used, where a "bodge" is used to distort the magnetic field from a field winding in such a way that a rotating magnetic "vector" is produced that the rotor follows. A magnetic shunt is produced with a turn of conductor at the air gap in the steel core that the field coil is wound on. When power is first applied, the rotor position relative to the air gap will cause it to be jerked in one or the other direction, and once motion has started the resulting rotating field reinforces that motion. Shaded pole motors are simple, cheap, and have been around almost forever.
Excellent layman's introduction to shaded pole motors - YouTube video, 8 minutes.

(2) A brushless DC motor (BLDCM) may be used. The synchronous motor described above is a special-case, simple subset of a BLDCM. In both cases a permanent magnet rotor follows a rotating AC field. In a "true" BLDCM the field is usually generated electronically by switching DC. In these simple synchronous motors the rotating field is supplied from the AC mains via a transformer. Motors that need a clean, fast start use magnetic sensors that give absolute feedback on direction and speed. Motors that must rotate the right way (e.g. a disc drive motor) may use sensorless systems that derive back-EMF voltages from the motor windings, BUT circuitry is included to check rotation and adjust the powering if it starts in the wrong direction. Systems that do not care about direction and that want the lowest cost just use a sensorless system and accept what comes.

• Most likely (1), unless someone hid a rectifier circuit inside the casing - these were AC motors (making it especially surprising; most AC devices are for mains voltage, and if it's 12V it's DC. In this case it was 12V AC, as written on the label on the motor, along with an RPM speed of some.... 5?). – SF. Apr 1 '15 at 5:20
• ...I've checked the links you provided and it seems a shaded pole motor is reversible only by mechanical modification (flipping the stator). Normally, if you apply AC it will always start in the same direction - so unless that's some obscure variant, that's not it. – SF. Apr 1 '15 at 16:58
• I had a (very) old clock like this. There was a little knob in the back whose sole purpose was to spin it the right way after it was plugged in if it started the wrong way. You could reach behind it when nobody was looking and spin it the other way, and the second hand would start moving backwards at an otherwise perfect speed.
– uhoh Oct 31 '16 at 6:29
• Unrelated to this type of motor, but I had a battery wall clock that was standard EXCEPT the mechanism ran anti-clockwise. The time could be read easily enough once you realised what had been done, but it was otherwise completely confounding. Oct 31 '16 at 10:07

It is a synchronous AC motor. It will spin at a precise rate relative to the AC frequency (50 Hz or 60 Hz). This is useful to keep the spin rate constant under varying loads, such as in a microwave oven.
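The "precise rate relative to the AC frequency" mentioned above is the standard synchronous-speed relationship n = 120·f/p. A minimal sketch of that formula — the function name and the pole counts are my own illustrative choices, not from the thread:

```python
# Synchronous speed of an AC motor: n_sync = 120 * f / p (in RPM),
# where f is the mains frequency in Hz and p is the number of poles.

def synchronous_rpm(freq_hz: float, poles: int) -> float:
    """Speed at which the rotor locks to the rotating stator field."""
    return 120.0 * freq_hz / poles

# A 2-pole motor on 60 Hz mains turns at 3600 RPM; the many-poled
# magnet in a cheap timer motor (plus gearing) turns far slower.
for poles in (2, 4, 48):
    print(f"{poles:2d} poles @ 60 Hz -> {synchronous_rpm(60, poles):.0f} RPM")
```

This is why such motors hold a constant speed under varying load: the rotor is locked to the mains frequency rather than to the applied torque.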
2021-09-26 12:35:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4099794924259186, "perplexity": 1360.5583121688426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00658.warc.gz"}
http://math.stackexchange.com/questions/303883/probability-space-defined-by-function-from-x-to-0-1
# Probability space defined by function from $X$ to $[0,1]$

Let $X$ be a non-empty countable set. If there is a function $f:X\rightarrow [0,1]$ such that $p(S)=\sum_{x\in S} f(x)$ for all $S \in 2^X$, then prove that $(X,2^X, p)$ is a probability space.

My partial answer: Let $\{A_n\}_{n\in \mathbb{N}}$ be a collection of disjoint subsets of $X$. Let $A_n=\{a_n^{(k)}: k\in \mathbb{N}\}$; then $$p\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{x\in \bigcup_{n=1}^{\infty} A_n} f(x)= \sum_{x\in \bigcup_{n=1}^{\infty} A_n} p(\{x\})=\sum_{n=1}^{\infty} \sum_{k=1}^{\infty} p(\{a_n^{(k)}\})= \sum_{n=1}^{\infty} p(A_n).$$ Hence, $p$ is $\sigma$-additive. However, I have no idea how to prove $p(X)=1$. Thanks.

As stated, $p(X)=1$ doesn't follow (nor does $p(X)<\infty$). Isn't there any assumption on $f$? (E.g. $\sum_{x\in X} f(x)=1$.) For a counterexample, let $X$ be infinite and $f(x)=1$ for all $x$.
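A quick numeric sanity check of the two points above — additivity of $p$ holds for any $f$, while $p(X)=1$ needs $\sum_{x\in X} f(x)=1$. The truncated geometric mass function below is my own illustrative choice:

```python
from fractions import Fraction

# p(S) = sum of f over S. Additivity over disjoint sets holds for any f;
# p(X) = 1 only if f sums to 1. Here f(n) = 2^-(n+1), truncated to n < 64.

def p(S, f):
    return sum(f(x) for x in S)

f = lambda n: Fraction(1, 2 ** (n + 1))
X = range(64)

evens = [x for x in X if x % 2 == 0]
odds = [x for x in X if x % 2 == 1]
assert p(evens, f) + p(odds, f) == p(X, f)  # additivity on disjoint sets

# p(X) tends to 1 as the truncation grows; here it is exactly 1 - 2^-64.
print(p(X, f))

# The counterexample from the comment: f(x) = 1 makes p(X) = |X|, not 1.
print(p(X, lambda x: 1))
```

Exact rational arithmetic via `Fraction` avoids floating-point noise, so the additivity check is an exact equality rather than an approximation.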
2015-12-02 02:10:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9980278015136719, "perplexity": 91.84673606194738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398525032.0/warc/CC-MAIN-20151124205525-00212-ip-10-71-132-137.ec2.internal.warc.gz"}
http://delleparoleguerriere.it/blao/stack-divs-horizontally-bootstrap.html
Create a mobile, touch-swipe bootstrap 4 carousel that looks amazing on any devices and browsers. twitter-bootstrap. Viewed 1k times 5 \$\begingroup\$ I am developing a form on bootstrap, and I'd like to know if what I am doing is best practice and clean code or not. Here in this tutorial we are going to explain how you can center align a form in Bootstrap. The most important point in the above example is "width: 100px; float:left;" - this is what causes DIVs to go side by side. The Bootstrap pricing table is almost similar to some business pricing table templates nowadays. You will learn about responsive design and the Bootstrap grid system. This is why we've decided to share our views on choosing the technology stack for web application development and the criteria we use in the process. This slider is now available with our Free Website Creator! It's been several months since our last update, but the size of this update should help get us back on track. The following example shows a simple "stacked-to-horizontal" two-column layout, meaning it will result in a 50%/50% split on all screens, except for extra small screens. The best free panel snippets available. The auto value automatically adjusts the left and right margins and horizontally center-aligns the div element's block box. This places a thin line around each image (as it uses framebox). Learn how to accomplish this. How can the example images below be. That way you could make the left column the same height as the right column (which appears to be bigger), and from there use positioning so that the button appears at the very bottom (which is where the "Confirmar venta" button appears). Meaning of numbers in "col-md-4", "col-xs-1", "col-lg-2" in Bootstrap: the grid system in Bootstrap helps you to align text side-by-side and uses a series of containers, rows and columns. Stacking divs horizontally | CSS Creator Csscreator.
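The `width: 100px; float:left;` technique called out above can be shown as a minimal, framework-free snippet; the class names are my own, illustrative choices:

```html
<!-- Two fixed-width divs sit side by side because both are floated left. -->
<style>
  .box { width: 100px; float: left; border: 1px solid #777; }
  /* Clearfix so the parent regains height after its floated children. */
  .row::after { content: ""; display: block; clear: both; }
</style>
<div class="row">
  <div class="box">Left</div>
  <div class="box">Right</div>
</div>
```

Without the clearfix, the parent collapses to zero height because floated children are taken out of normal flow.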
It's the most current, in-depth and exciting coding course to date. I want to fill in all the gaps when the divs are displayed. I'm used to working with Foundation and I'm currently working with Bootstrap. Use the icon-* class to define the icon for the form control. In an inline form, all of the elements are in-line, left-aligned, and the labels are alongside. These tags are primarily used as "hooks" for your CSS. Mobile First Strategy. That's a good thing! I've used WordPress since day one all the way up to v17, a decision I'm very happy with. Html - horizontally aligning multiple divs (CSS) - Stack Overflow. I can't figure out what the problem with my code is. table-responsive{-sm|-md|-lg|-xl}. Bootstrap is one of the best and most used HTML/CSS/JS front-end frameworks. Divs exceeding 3 align in the bottom row in the same part of the page. The following example shows a simple "stacked-to-horizontal" two-column layout, meaning it will result in a 50%/50% split on all screens, except for extra small screens, which it will automatically stack (100%). Category: None Tags: jquery-ajax, android, firefox, twitter-bootstrap, jquery-animate, radio-button. The problem might be caused by IE's z-index stacking issue as described in this answer to a related question. I run a marketplace with 117 cards, ordered (in Angular) by number of 'upvote. Check out the latest version of Bootstrap! Toggleable, contextual menu for displaying lists of links. The white-space: nowrap; property is used to make all divs stay on a single line. div_row{ border-bottom: 1px solid #777; }. Now UI Kit Angular. Points: 0. Closed 2 years ago. As Flexbox children, the Columns in each row are the same height. I want them to align horizontally. Hi all, I need a way to solve a positioning problem that is quite tricky to calculate server-side, so am hoping CSS will help me. How to place two divs side by side with bootstrap 3 [Answered] RSS. It will auto-hide the ul elements according to the browser window width.
In 2013, Bootstrap 3 was released with a mobile-first approach to design and a fully redesigned set of components using the immensely popular flat design. Meaning of numbers in "col-md-4"," col-xs-1", "col-lg-2" in Bootstrap The grid system in Bootstrap helps you to align text side-by-side and uses a series of container, rows and column. 8 FULL STACK MASTERCLASS 45 AI projects 4. Here are some more FAQ related to this topic: How to set the height of a DIV to 100% using CSS; How to make a DIV not larger than its contents using CSS. 15 == === Changes since 1. 12 Column Bootstrap 4 Flexbox Grid. The above code states that the TOP margin and Bottom margin are set to 0 and LEFT margin and Right margin set to auto (automatic). Using the sample you can see how the Bootstrap grid and fluid grid behave inside and outside of the Bootstrap container. Python Language. css file followed by your website style sheet. Ask Question Asked 4 years, 7 months ago. If you have any questions, please leave it in the comments below! Don't forget to like and subscribe! Follow me on. Centering a div within a div, horizontally and vertically. The Bootstrap grid has multiple tiers (AKA breakpoints), media queries, and fluid column widths for a reason… As you may know Bootstrap. Stone River eLearning was founded in 2011 and has since taught over a quarter of a million students. But the important point here is that when the user views this on an iPad or iPhone, these three "Hi" items will stack on top of each other, so any added space via height or min-height will look pretty bad. col-xl-* and only use the. Bootstrap is the most popular HTML, CSS, and JS framework in the world for building responsive, mobile-first projects on the web. The container is the root of the Bootstrap 4 grid system and it is used to control the width of the layout. Chris on Code @chrisoncode September 29, 2014 0 Comments Views Code Demo Bootstrap has a great many features. 
In this example, I am using Bootstrap’s width utility class of. Working of each grid system in bootstrap is exactly the same. Note: The value of width needs to be set for. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. Learn how to align images side by side with CSS. I'm creating a horizontal menu bar. This easy web design software comes with 1800+ awesome website blocks: image galleries, lightboxes, image sliders, bootstrap carousel, counters, countdowns, full-screen intros, features, data tables, pricing tables, progress bar, timelines, tabs, accordions, call-to-action, forms, maps, social blocks, testimonials, footers, and more. looking like this:. Cards are ultimately panels and panels sometimes look nice if laid out horizontally. Now UI Kit Angular. ) is easily accomplished using Bootstrap 4's Flex and Auto-margin Utilityclasses. Use MathJax to format equations. Bootstrap Tabs: Summary. Bootstrap was created by the Twitter team for the purpose of working faster on a standardized interface, which makes Bootstrap a purpose made-tool, one that's not completely aligned with standard concepts of web development. At the end of this module, you need to complete your first assignment. Agregue esas clases con colores y linkie el css de bootstrap 4. One of my field is out of position. Z-index is the CSS style that controls how images or divs stack over each other. You need to set the width of the element for this to work. Using the * html hack for IE, I float the left and middle columns to the left and then float the right column to the right. I'd recommend using H5BP or Bootstrap directly instead. Bootstrap Grid System is mobile-first fluid grid system which is made up of a series rows and columns to provide a structure to website and to place it’s content in the intersected areas easily. The short answer is: apply the CSS margin:0 auto to the div element which you want to make center align. 
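The `margin: 0 auto` rule described above only centers a block that has an explicit width; a minimal sketch (the selector name is mine, illustrative):

```html
<!-- Horizontal centering with auto side margins on a fixed-width block. -->
<style>
  .centered {
    width: 300px;   /* auto margins need a set width to have leftover space */
    margin: 0 auto; /* 0 top/bottom; left and right split the remainder */
  }
</style>
<div class="centered">I am horizontally centered.</div>
```

If the element is full-width (the default for a block element), there is no leftover space for the auto margins to distribute, so nothing visibly changes.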
Bootstrap Vertical and Horizontal Divider - Dividers are basically used to create line which works as separator. The layout that I want is like Enquiry No : (Textbox here) Enquiry Date :. On the other hand, this should not be that surprising: Bootstrap is all about mobile first and you probably don't want to show all the items in your navigation bar next to each other on a mobile device with a limited screenwidth. At the end of the tutorial, you can download all the files as handy templates. Points: 0. Solo te toma un minuto registrarte. The xs tier still exists in Bootstrap 4, but use of the -xs infix has ceased. The simplest way to do this is to use the CSS float property, Centering a div or any other block-level element horizontally is a special case for CSS layout,. Bootstrap Tabs: Summary. Everything in a website revolves around the content. The initial load will not have Bootstrap in place, so all the grid columns pile on top of each other. Ask Question Asked 6 years, 1 month in a div. By default, the navbar takes up the whole width of the screen and acts like a Bootstrap 4 flex container. One thing you definitely don't want to do is have clear:both set on the divs. From what I've read on Bootstrap, there doesn't appear to be anything that does this. Divs which are floated left will naturally push onto the 'line' below after they read the right bound of their parent. This is why we've decided to share our views on choosing the technology stack for web application development and the criteria we use in the process. Consult this Stack Overflow question as it. Bootstrap 4 is currently the most popular front-end framework in the world. col-size-* and only use the. Centering a div within a div, horizontally and vertically. table-responsive. Bootstrap 4 Containers – Bootstrap provides container class to create containers which is used as wrapper for the other elements. In an inline form, all of the elements are in-line, left-aligned, and the labels are alongside. 
If you have any questions, please leave it in the comments below! Don't forget to like and subscribe! Follow me on. I can't figure out what is the problem with my code. col-xl class on a specified number of col elements. Update the question so it's on-topic for Webmasters Stack Exchange. Q&A for WordPress developers and administrators. Children Learn the difference on Descendants and Children here, CSS Child vs Descendant selectors (Never mind the post beeing about CSS, this is a generic pattern). ; white-space: nowrap; property is used make all div in a single line. Bootstrap Tutorial - Align label and control in same line. (to remove a border to fit a different ratio, it's necessary to cut only horizontally, or only vertically, but not both). How to create side-by-side images with the CSS float property: How to create side-by-side images with the CSS flex property: Note: Flexbox is not supported in Internet Explorer 10 and earlier versions. I'm trying to use Bootstrap as the layout tool for the theme, but for some Stack Exchange Network Stack Exchange network consists of 175 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. As Flexbox children, the Columns in each row are the same height. At the end of this module, you need to complete your first assignment. Set the content of the label horizontally and vertically centered in Java; How to left align components vertically using BoxLayout with Java? Vertically stack tabs with Bootstrap. I put them all on different rows and nothing happened. Is it possible to stack divs of different heights horizontally then vertically. Learn how to align images side by side with CSS. Bootstrap 4 Navbar Brand. Bootstrap Builder provides a real-time visual design environment for the popular Bootstrap 3 and 4 front end frameworks. A ver si me podéis echar un cable. 
Using the styling displayed above each image is stacking vertically one below the other. module('myModule', ['ui. Tabular data is tricky to display on mobiles since the page will either be zoomed in to read text, meaning tables go off the side of the page and the user has to scroll backwards and forwards to read the table, or the page will be zoomed out, usually meaning that the table is too small to be able to read. What exactly do responsive frameworks perform-- they deliver us with a handy and functioning grid environment to place out the content, making sure if we define it right and so it will operate and present properly on any kind of gadget despite the sizes of its screen. 1 are not fully responsive. I teach responsive Web design with and without Bootstrap at ed2go, and am the author of a number of books on responsive Web design (you can google me and find them easily). To help you fine tune responsiveness for these four types of screens, Bootstrap divides the width of web viewing devices into the 12-column grid system. HTML and CSS seems to be straightforward to him. The simplest way to do this is to use the CSS float property, Centering a div or any other block-level element horizontally is a special case for CSS layout,. Bootstrap Vertical and Horizontal Divider - Dividers are basically used to create line which works as separator. However, it appears that an object can't float to the left of another object floating left. Bootstrap's Inline form layout can be used to place the form controls side-by-side in a compact layout. Vertical align in bootstrap 3. Hi guys, I am trying to display two forms with several elements on a page. If these full width columns would have also float: left; I could add as many columns inside row as I want. Bootstrap; Carousel; Align container with 3 divs horizontally. This will resolve this particular issue but you may encounter it for other tags such as p as you add them. 
Which is irrelevant, as Bootstrap is much more than a grid system. You might want to look up some CSS to see how you can manipulate with Divs. However, using it with bootstrap might be problematic (to b. Ayuda con alineación de divs y bootstrap con css. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Update the question so it's on-topic for Webmasters Stack Exchange. Understanding vertical-align, or "How (Not) To Vertically Center Content" A FAQ on various IRC channels I help out on is How do I vertically center my stuff inside this area? This question is often followed by I'm using vertical-align:middle but it's not working!. If you do want to use SLDS, you need to accept its' design philosophy. Bootstrap’s Inline form layout can be used to place the form controls side-by-side in a compact layout. Then you will learn the basics of Bootstrap, setting up a web project using Bootstrap. Estos dos divs tienen distintos tamaños tanto de ancho como de alto, y quería alinearlos a ambos en el centro del div contenedor. 1 === * The installer now includes a check for a data corruption issue with certain versions of libxml2 2. How to disable bootstrap font. TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. Define the position of the icon using has-icon-left or has-icon-right class. but there are some JavaScript options for making same height divs. Probably the most advanced jQuery pagination plugin. This example shows how to adjust the Orientation of content within a StackPanel element, and also how to adjust the HorizontalAlignment and VerticalAlignment of child content. Centering content vertically is difficult almost always. The tech stack for this site is fairly boring. 
The following values vertically align the element relative to the entire line: Aligns the top of the element and its descendants with the. HTML forms are the integral part of the web pages and applications, but creating the form layouts or styling the form controls manually one by one using CSS are often boring and tedious. The modern approach, use display: flex on the parent container. Together with further augmenting functions such as the width slider, custom breakpoints, global content updates, customizable prebuilt components and more, this leads to a greatly improved design workflow where creativity thrives. The above code states that the TOP margin and Bottom margin are set to 0 and LEFT margin and Right margin set to auto (automatic). Cards are ultimately panels and panels sometimes look nice if laid out horizontally. Bootstrap is good for creating consistency across projects & teams. form-horizontal class to the form tag to have horizontal form styling. Build responsive, mobile-first projects on the web with the world's most popular front-end component library. Another aspect that we would like to point out is the effect of the technology stack on the project cost. Centering 4 div boxes horizontally. Learn how to use jQuery to quickly work with the DOM. The following values vertically align the element relative to the entire line: Aligns the top of the element and its descendants with the. It’s the most current, in-depth and exciting coding course—to date. I'm very new in bootstrap my problem is kinda trivial. Use the *-column-width property to change the width of the panels. Define the position of the icon using has-icon-left or has-icon-right class. floating left/right doesn't make a difference. nav-stacked class:ExampleLive Demo. I would like the image to be to the left of the title, and for both of these elements to be centered both horizontally and vertically in the div. 
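For the "centered both horizontally and vertically in the div" layout asked about above, one modern option (mentioned earlier as `display: flex` on the parent container) looks like this; the class name and `logo.png` path are illustrative assumptions:

```html
<!-- Flexbox centers the children on both axes without floats or tables;
     the image and title sit side by side as flex-row items. -->
<style>
  .banner {
    display: flex;
    align-items: center;     /* vertical centering   */
    justify-content: center; /* horizontal centering */
    height: 150px;
  }
</style>
<div class="banner">
  <img src="logo.png" alt="logo">
  <h1>Title</h1>
</div>
```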
Create a mobile, touch-swipe bootstrap 4 carousel that looks amazing on any devices and browsers. Here are some more FAQ related to this topic: How to set the height of a DIV to 100% using CSS; How to make a DIV not larger than its contents using CSS. Column Height In Bootstrap 3. In fact floats are weird period because they topple over like that. I'm working on a site's wireframe in this days with a responsive grid that I make up but after a week I'm starting to think that maybe it's best for me to use a responsive grid that have the exact same measures of bootstrap. The items in the navigation bar are shown as vertical rows instead of horizontally next to each other. Specifically, we: Set background-color: #fff; on the body; Use the @font-family-base, @font-size-base, and @line-height-base attributes as our typographic base; Set the global link color via @link-color and apply link underlines only on :hover; These styles can be found within scaffolding. Note: In this example, all the blocks except the non-positioned one are translucent to show the stacking order. I am full stack developer with more than 5+ years of experience on web development. From basic CSS styling to popular frameworks like Bootstrap, this training will stack your resume with skills that will check every box on your dream job description. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions. The relevant html and css. module('myModule', ['ui. If you've ever tried it, it is hard - especially if you want to avoid using tables. Note, though, that layout options in Bootstrap 4. Thus, a paragraph is a vertical stack of line boxes. The left column I float:left, the right column I float:right, but its the. In the above link, I'm using bootstrap. However, it appears that an object can't float to the left of another object floating left. 
Viewed 1k times 5 \$\begingroup\$ I am developing a form on bootstrap, and I'd like to know if what am doing is best practice and a clean code or not. Using the sample you can see how the Bootstrap grid and fluid grid behave inside and outside of the Bootstrap container. Your options for Bootstrap navigation display are not limited to just links: you can also use tabs and pills on your website. The Complete Web Developer Course: Build 14 Websites, Become a Full Stack Coder with 28 Hours of Instruction on HTML, JavaScript, MySQL & More. A Bootstrap card is a kind of container for content. Feel free to rate, comment, ask questions and subscribe!. improve this answer. Div side by side, float div boxes, floating div boxes, aligning div boxes, float div box. Making statements based on opinion; back them up with references or personal experience. I’m an American full stack developer with significant experience with classic backend stacks and front-end frameworks including AWS , Django , Angular , React , and WordPress. However, there is an alternative with display: inline-block…. Ayuda con alineación de divs y bootstrap con css. The main drawback of this is that everything built with Bootstrap will have very similar looks. Here is the usual scenario with side by side forms in Bootstrap - they simply break down. Long story short, there are a few ways to do it: Set the div containers to either float: left or float: right. ZERO to GOD Python 3. Tengo dos divs dentro de un contenedor que ocupa el 100% de la pantalla. Then you will learn the basics of Bootstrap, setting up a web project using Bootstrap. One of my field is out of position. Bootstrap Web Development CSS Framework. "How to center a div" (inside another div or inside the body itself) is one of the most discussed questions ever, and to my surprise there are masses of tutorials which totally over-complicated this issue, usually also with lots of disadvantages, like a fixed height and width of the content div. 
How do you place several divs side by side instead of letting them stack vertically? By default, block-level elements such as div take up the full width of their parent and stack on top of each other, so laying them out horizontally takes a deliberate technique. Long story short, there are a few ways to do it.

Float: set the div containers to either float: left or float: right. Divs floated left naturally push onto the line below once they reach the right bound of their parent. The drawback is that floats must be cleared afterwards, and adding empty clearing divs is, in effect, styling with HTML. One thing you definitely don't want to do is set clear: both on the divs themselves — that tells the browser not to allow any elements to sit on the same line as them.

Inline-block: use display: inline-block, which displays the inner div as an inline element as well as a block. This avoids float's drawbacks, which is why it rocks; the whitespace between elements in the markup shows up as gaps on the page, which is why it sucks.

Table display: setting display: table-cell on each of the divs works, but it is generally a botched solution of last resort. Despite their status as mortal enemies, divs and tables can work together if you need them to: style the table cells as necessary to obtain the desired widths, and get over those feelings of shame and guilt that will follow.

Flexbox: the modern approach is display: flex on the parent container. As a bonus, the nature of flexbox is to make all div containers within a display: flex container the same height — a problem that otherwise needs JavaScript workarounds when columns have content of different lengths.

Centering is the companion question. "How to center a div" (inside another div or inside the body itself) is one of the most discussed questions ever, and a surprising number of tutorials over-complicate it, usually with disadvantages such as a fixed height and width for the content div. The simple recipe: set the element's width, then give it margin: 0 auto — top and bottom margins of 0, left and right margins set to auto — and the automatic left and right margins push the element into the center of its container. The value of width needs to be set for this to work, and remember that text-align only works on inline elements. In Bootstrap you can use the built-in center-block class to center an image in its container, and in Bootstrap 4 the mx-auto utility class sets the left and right margins to auto.

Bootstrap's grid system answers most of these layout questions directly, and it is one of the main features used in pretty much every Bootstrap project. The grid uses a series of containers, rows, and columns to lay out and align content. The default grid utilizes 12 columns, making for a 940px-wide container without responsive features enabled; with the responsive CSS file added, the grid adapts to be 724px or 1170px wide depending on your viewport, and below 767px viewports the columns become fluid and stack vertically. Column sizes inside each row need to add up to 12: col-md-6, for example, spans six grid units on medium screens and up. Bootstrap 3 was rewritten to be mobile friendly from the start, and Bootstrap 4 uses the flexbox model, which makes it more powerful — as flexbox children, the columns in each row are the same height, and you can create equal-width columns for all devices just by removing the number from the col class. Bootstrap also lets you nest rows and columns inside other columns.

For cards, the simplest way to group them is the card-group class; cards are ultimately panels, and panels sometimes look nice laid out horizontally. The card-columns class orders cards top-down rather than in rows; ordering in rows is technically more challenging because it requires an algorithm to fill empty spaces when cards have different heights. It is possible with pure CSS, but not in a cross-browser-compatible way, so you may still want the Masonry plugin as a fallback.

Two last notes on stacking in the other sense, along the z-axis. z-index will only work when you specify that a certain element has a defined position. And there is a peculiar part of the specification concerning opacity: applying an opacity value to a non-positioned block creates a new stacking context, which changes where that block's background and border sit relative to the floating and positioned blocks around it.
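The approaches above can be sketched in a few lines of markup. This is a minimal illustration rather than a drop-in template, and the class names in the second block assume Bootstrap 4 is loaded on the page:

```html
<!-- Flexbox: display: flex on the parent lays the children out in a row,
     and makes them equal height by default -->
<div style="display: flex;">
  <div style="flex: 1;">content1</div>
  <div style="flex: 1;">content2</div>
  <div style="flex: 1;">content3</div>
</div>

<!-- Bootstrap 4 grid: column sizes inside a row should add up to 12 -->
<div class="container">
  <div class="row">
    <div class="col-md-4">left</div>
    <div class="col-md-4">middle</div>
    <div class="col-md-4">right</div>
  </div>
</div>

<!-- Horizontal centering: a set width plus auto left/right margins -->
<div style="width: 200px; margin: 0 auto;">centered block</div>
```

With Bootstrap 4 loaded, the last example can equivalently be written with the utility class, e.g. `<div class="mx-auto" style="width: 200px;">`.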
http://recasarfati.com/tags/heterogeneous-agents/
# Heterogeneous Agents

## Estimating HANK: Macro Time Series and Micro Moments

We show how to use sequential Monte Carlo methods to estimate a heterogeneous agent New Keynesian (HANK) model featuring both nominal and real rigidities. We demonstrate how the posterior distribution may be specified as the product of the standard …
https://ask.sagemath.org/answers/15628/revisions/
# Revision history

There is already a problem in the h() function:

    sage: h(4,4,2,x,0)
    1/3*x^3 - 1/2*x^2 - 1/12*x - 7/8
    sage: h(4,4,2,x,0)(x=1)
    -9/8
    sage: h(4,4,2,1,0)
    1/4

First, your sum is indexed on non-integer entries, since x+n+(k+1)/2+1-i may not be an integer. You should fix this if possible (I am not sure how the symbolic sum() handles this, and it might be a reason for this weird behaviour).

Then, you should distinguish between symbolic variables and Python variables. A symbolic variable is defined with the var() function; such variables are elements of the Symbolic Ring, they somehow play the role of the identity function x |--> x and are not meant to receive a value:

    sage: var('x')
    x
    sage: x.derivative()
    1

But then writing:

    sage: x = 2

just overwrites the previous value of x (which was a kind of identity map) and makes x an integer. If you want to evaluate the symbolic variable x at the point 2, you have to write:

    sage: x(x=2)

When you write:

    t = var('t')

and later:

    for t in range(2,2*x+2*n+2):

then t stops being a symbolic variable and becomes a Python integer, and the first line was useless. All variables that are meant to receive a value (e.g. i=2) should be Python variables, not symbolic ones (though you could call h(4,4,2,var('y'),0); in that case the symbolic variable y is the value received by the parameter x).

When you write:

    def h(i,j,k,x,n):

then i, j, k, x, n are Python parameters; they will eventually get a value. If one of them (x?) has to remain a symbolic variable until the end, it should not be a parameter of your function. So the question is: which variable in your expression will remain an unknown at the end?

This is, by the way, the same difference as between the sum() function and a loop: the first works symbolically and can take symbolic variables as endpoints (as when computing an integral from a formula); the second cannot, since it does the computation by effectively adding the elements one by one.

That said, it is still possible that there is a bug in the symbolic summation of binomials, but you should first try to clean up the two issues mentioned above. I hope I am not being too confusing.
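The same distinction exists in other CAS front ends. As an illustration — SymPy here, not Sage, so the function names differ, but the idea of substitution versus Python rebinding is the same:

```python
from sympy import symbols

# A symbolic variable plays the role of the identity map x |--> x
x = symbols('x')
expr = x**3 / 3 - x**2 / 2

# To evaluate at a point, substitute -- do not reassign the Python name
val = expr.subs(x, 2)
print(val)            # 2/3

# Reassigning the Python name just rebinds it to an int;
# the symbol stored inside expr is unaffected
x = 2
print(expr)           # still contains the symbol x
```

Here `expr.subs(x, 2)` plays the role of Sage's `expr(x=2)`, and the final rebinding shows why `x = 2` never "evaluates" anything.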
2019-07-18 09:19:01
https://www.qb365.in/materials/stateboard/11th-standard-english-medium-chemistry-unit-test-important-questions-2019-free-download-4408.html
" /> --> #### UNIT TEST 8 11th Standard Reg.No. : • • • • • • Chemistry Time : 03:00:00 Hrs Total Marks : 150 PART -A 25 x 1 = 25 1. If Kb and Kf for a reversible reactions are 0.8 ×10–5 and 1.6 × 10–4 respectively, the value of the equilibrium constant is, (a) 20 (b) 0.2 x 10-4 (c) 0.05 (d) none of these 2. At a given temperature and pressure, the equilibrium constant values for the equilibria ${ 3A }_{ 2 }+{ B }_{ 2 }+2C\overset { { K }_{ 1 } }{ \rightleftharpoons } { 2A }_{ 3 }BC$ and ${ A }_{ 3 }BC\overset { { K }_{ 2 } }{ \rightleftharpoons } 3/2\left[ { A }_{ 2 } \right] +\frac { 1 }{ 2 } { B }_{ 2 }+C$ The relation between K1 and K2 is (a) $K_1={1\over \sqrt{K_2}}$ (b) $K_2=K_1^{-1/2}$ (c) $K_1^2=2K_2$ (d) ${K_1\over 2}=K_2$ 3. The equilibrium constant for a reaction at room temperature is K1 and that at 700 K is K2. If K1 > K2, then (a) The forward reaction is exothermic (b) The forward reaction is endothermic (c) The reaction does not attain equilibrium (d) The reverse reaction is exothermic 4. The formation of ammonia from N2(g) and H2(g) is a reversible reaction N2(g) + 3H2(g) ⇌ 2NH3(g) + Heat What is the effect of increase of temperature on this equilibrium reaction (a) equilibrium is unaltered (b) formation of ammonia is favoured (c) equilibrium is shifted to the left (d) reaction rate does not change 5. Solubility of carbon dioxide gas in cold water can be increased by (a) increase in pressure (b) decrease in pressure (c) increase in volume (d) none of these 6. Which one of the following is incorrect statement? (a) for a system at equilibrium, Q is always less than the equilibrium constant (b) equilibrium can be attained from either side of the reaction (c) presence of catalyst affects both the forward reaction and reverse reaction to the same extent (d) Equilibrium constant varied with temperature 7. K1 and K2 are the equilibrium constants for the reactions respectively. 
${ N }_{ 2 }(g)+{ O }_{ 2 }(g)\overset { { K }_{ 1 } }{ \rightleftharpoons } 2NO(g)$ and $2NO(g)+{ O }_{ 2 }(g)\overset { { K }_{ 2 } }{ \rightleftharpoons } { 2NO }_{ 2 }(g)$. What is the equilibrium constant for the reaction NO2(g) ⇌ ½N2(g) + O2(g)?
(a) ${1\over \sqrt{K_1K_2}}$ (b) $(K_1K_2)^{1/2}$ (c) ${1\over 2K_1K_2}$ (d) $({1\over K_1K_2})^{3/2}$

8. In the equilibrium 2A(g) ⇌ 2B(g) + C2(g), the equilibrium concentrations of A, B and C2 at 400 K are 1 × 10–4 M, 2.0 × 10–3 M and 1.5 × 10–4 M respectively. The value of KC for the equilibrium at 400 K is
(a) 0.06 (b) 0.09 (c) 0.62 (d) 3 x 10-2

9. An equilibrium constant of 3.2 × 10–6 for a reaction means, the equilibrium is
(a) largely towards forward direction (b) largely towards reverse direction (c) never established (d) none of these

10. ${K_c\over K_p}$ for the reaction N2(g) + 3H2(g) ⇌ 2NH3(g) is
(a) ${1\over RT}$ (b) $\sqrt{RT}$ (c) RT (d) (RT)2

11. For the reaction AB (g) ⇌ A(g) + B(g) at equilibrium, AB is 20% dissociated at a total pressure of P. The equilibrium constant KP is related to the total pressure by the expression
(a) P = 24 KP (b) P = 8 KP (c) 24 P = KP (d) none of these

12. In which of the following equilibria are KP and KC not equal?
(a) 2 NO(g) ⇌ N2(g) + O2(g) (b) SO2 (g) + NO2 ⇌ SO3(g) + NO(g) (c) H2(g) + I2(g) ⇌ 2HI(g) (d) PCl5 (g) ⇌ PCl3(g) + Cl2(g)

13. If x is the fraction of PCl5 dissociated at equilibrium in the reaction PCl5 ⇌ PCl3 + Cl2, then starting with 0.5 mole of PCl5, the total number of moles of reactants and products at equilibrium is
(a) 0.5 - x (b) x + 0.5 (c) 2x + 0.5 (d) x + 1

14. The values of KP1 and KP2 for the reactions X ⇌ Y + Z and A ⇌ 2B are in the ratio 9 : 1. If the degree of dissociation and initial concentration of X and A are equal, then the total pressures at equilibrium P1 and P2 are in the ratio
(a) 36 : 1 (b) 1 : 1 (c) 3 : 1 (d) 1 : 9

15.
In the reaction Fe(OH)3 (s) ⇌ Fe3+(aq) + 3OH–(aq), if the concentration of OH– ions is decreased by ¼ times, then the equilibrium concentration of Fe3+ will
(a) not change (b) also decrease by ¼ times (c) increase by 4 times (d) increase by 64 times

16. Consider the reaction PCl5(g) ⇌ PCl3 (g) + Cl2 (g), where KP = 0.5 at a particular temperature. If the three gases are mixed in a container so that the partial pressure of each gas is initially 1 atm, then which one of the following is true?
(a) more PCl3 will be produced (b) more Cl2 will be produced (c) more PCl5 will be produced (d) none of these

17. Equimolar concentrations of H2 and I2 are heated to equilibrium in a 1 litre flask. What percentage of the initial concentration of H2 has reacted at equilibrium if the rate constants for both forward and reverse reactions are equal?
(a) 33% (b) 66% (c) (33)2% (d) 16.5%

18. In a chemical equilibrium, the rate constant for the forward reaction is 2.5 × 102 and the equilibrium constant is 50. The rate constant for the reverse reaction is,
(a) 11.5 (b) 5 (c) 2 x 102 (d) 2 x 10-3

19. Which of the following is not a general characteristic of equilibrium involving physical processes?
(a) Equilibrium is possible only in a closed system at a given temperature (b) The opposing processes occur at the same rate and there is a dynamic but stable condition (c) All the physical processes stop at equilibrium (d) All measurable properties of the system remain constant

20. For the formation of two moles of SO3(g) from SO2 and O2, the equilibrium constant is K1. The equilibrium constant for the dissociation of one mole of SO3 into SO2 and O2 is
(a) $1/K_1$ (b) $K_1^2$ (c) $({1\over K_1})^{1/2}$ (d) ${K_1\over 2}$

21.
Match the equilibria with the corresponding conditions:
i) Liquid ⇌ Vapour
ii) Solid ⇌ Liquid
iii) Solid ⇌ Vapour
iv) Solute (s) ⇌ Solute (Solution)

1) melting point
2) Saturated solution
3) Boiling point
4) Sublimation point
5) Unsaturated solution

(a) i-1, ii-2, iii-3, iv-4 (b) i-3, ii-1, iii-4, iv-2 (c) i-2, ii-1, iii-3, iv-4 (d) i-3, ii-2, iii-4, iv-5

22. Consider the following reversible reaction at equilibrium, A + B ⇌ C. If the concentrations of the reactants A and B are doubled, then the equilibrium constant will
(a) be doubled (b) become one fourth (c) be halved (d) remain the same

23. [Co(H2O)6]2+ (aq) (pink) + 4Cl– (aq) ⇌ [CoCl4]2– (aq) (blue) + 6 H2O (l)
In the above reaction at equilibrium, the reaction mixture is blue in colour at room temperature. On cooling this mixture, it becomes pink in colour. On the basis of this information, which one of the following is true?
(a) ΔH > 0 for the forward reaction (b) ΔH = 0 for the reverse reaction (c) ΔH < 0 for the forward reaction (d) The sign of ΔH cannot be predicted based on this information

24. The equilibrium constants of the following reactions are:
N2 + 3H2 ⇌ 2NH3   : K1
N2 + O2 ⇌ 2NO      : K2
H2 + ½O2 ⇌ H2O   : K3
The equilibrium constant (K) for the reaction ${ 2NH }_{ 3 }+5/2{ O }_{ 2 }\overset { K }{ \rightleftharpoons } 2NO+{ 3H }_{ 2 }{ O }$ will be
(a) $K_2^3{K_3\over K_1}$ (b) $K_1{K_3^3\over K_2}$ (c) $K_2{K_3^3\over K_1}$ (d) $K_2{K_3\over K_1}$

25. A 20 litre container at 400 K contains CO2 (g) at pressure 0.4 atm and an excess of SrO (neglect the volume of solid SrO). The volume of the container is now decreased by moving the movable piston fitted in the container. The maximum volume of the container, when the pressure of CO2 attains its maximum value, will be (given that SrCO3 (s) ⇌ SrO (s) + CO2(g), KP = 1.6 atm):
(a) 2 litre (b) 5 litre (c) 10 litre (d) 4 litre

26.

PART -B 15 x 2 = 30

27.
Consider the following equilibrium reactions and relate their equilibrium constants:
i) N2 + O2 ⇌ 2NO ; K1
ii) 2NO + O2 ⇌ 2NO2 ; K2
iii) N2 + 2O2 ⇌ 2NO2 ; K3

28. If there is no change in concentration, why is the equilibrium state considered dynamic?

29. For a given reaction at a particular temperature, the equilibrium constant has a constant value. Is the value of Q also constant? Explain.

30. What is the relation between KP and KC? Give one example for which KP is equal to KC.

31. For a gaseous homogeneous reaction at equilibrium, the number of moles of products is greater than the number of moles of reactants. Is KC larger or smaller than KP?

32. When the numerical value of the reaction quotient (Q) is greater than the equilibrium constant (K), in which direction does the reaction proceed to reach equilibrium?

33. State the Le Chatelier principle.

34. Consider the following reactions,
a) H2(g) + I2(g) ⇌ 2 HI
b) CaCO3 (s) ⇌ CaO (s) + CO2(g)
c) S(s) + 3F2 (g) ⇌ SF6 (g)
In each of the above reactions, find out whether you have to increase (or) decrease the volume to increase the yield of the product.

35. State the law of mass action.

36. Explain how you will predict the direction of an equilibrium reaction.

37. Derive a general expression for the equilibrium constants KP and KC for the reaction 3H2(g) + N2(g) ⇌ 2NH3(g).

38. Write a balanced chemical equation for an equilibrium reaction for which the equilibrium constant is given by the expression $K_c={[NH_3]^4[O_2]^5\over [NO]^4[H_2O]^6}$

39. What is the effect of an added inert gas on a reaction at equilibrium?

40. Derive the relation between KP and KC.

41. Deduce the van't Hoff equation.

42.

PART- C 10 x 3 = 30

43. The value of Kc for the following reaction at 717 K is 48.

44. The value of Kc for the reaction

45. One mole of H2 and one mole of I2 are allowed to attain equilibrium. If the equilibrium mixture contains 0.4 mole of HI, calculate the equilibrium constant.

46.
The equilibrium concentrations of NH3, N2 and H2 are 1.8 × 10-2 M, 1.2 × 10-2 M and 3 × 10-2 M respectively. Calculate the equilibrium constant for the formation of NH3 from N2 and H2. [Hint: M = mol lit-1]

47. For an equilibrium reaction, Kp = 0.0260 at 25° C and ΔH = 32.4 kJ mol-1. Calculate Kp at 37° C.

48. For the reaction A2(g) + B2(g) ⇌ 2AB(g); ΔH is –ve, the following molecular scenes represent different reaction mixtures (A – green, B – blue).
i) Calculate the equilibrium constants KP and KC.
ii) For the reaction mixtures represented by scenes (x) and (y), in which direction does the reaction proceed?
iii) What is the effect of an increase in pressure for the mixture at equilibrium?

49. One mole of PCl5 is heated in a one litre closed container. If 0.6 mole of chlorine is found at equilibrium, calculate the value of the equilibrium constant.

50. For the reaction SrCO3 (s) ⇌ SrO (s) + CO2(g), the value of the equilibrium constant KP = 2.2 × 10–4 at 1002 K. Calculate KC for the reaction.

51. To study the decomposition of hydrogen iodide, a student fills an evacuated 3 litre flask with 0.3 mol of HI gas and allows the reaction to proceed at 500° C. At equilibrium he found the concentration of HI to be 0.05 M. Calculate KC and KP.

52. Oxidation of nitrogen monoxide was studied at 200° C with initial pressures of 1 atm of NO and 1 atm of O2. At equilibrium, the partial pressure of oxygen is found to be 0.52 atm. Calculate the KP value.

53.

PART- D 13 x 5 = 65

54. The equilibrium constant at 298 K for a reaction is 100. A + B $\rightleftharpoons$ C + D. If the initial concentration of all the four species is 1 M, the equilibrium concentration of D (in mol lit-1) will be

55. Consider the following reaction: Fe3+(aq) + SCN–(aq) ⇌ [Fe(SCN)]2+(aq). A solution is made with initial Fe3+ and SCN– concentrations of 1 x 10-3 M and 8 x 10-4 M respectively. At equilibrium, the [Fe(SCN)]2+ concentration is 2 x 10-4 M. Calculate the value of the equilibrium constant.

56.
The atmospheric oxidation of NO, 2NO(g) + O2(g) ⇌ 2NO2(g), was studied with initial pressures of 1 atm of NO and 1 atm of O2. At equilibrium, the partial pressure of oxygen is 0.52 atm. Calculate Kp of the reaction.

57. The following water gas shift reaction is an important industrial process for the production of hydrogen gas: CO(g) + H2O(g) ⇌ CO2(g) + H2(g). At a given temperature Kp = 2.7. If 0.13 mol of CO, 0.56 mol of water, 0.78 mol of CO2 and 0.28 mol of H2 are introduced into a 2 L flask, find out in which direction the reaction must proceed to reach equilibrium.

58. 1 mol of PCl5, kept in a closed container of volume 1 dm3, was allowed to attain equilibrium at 423 K. Calculate the equilibrium composition of the reaction mixture. (The Kc value for PCl5 dissociation at 423 K is 2.)

59. The equilibrium constant for the following reaction is 0.15 at 298 K and 1 atm pressure: N2O4(g) ⇌ 2NO2(g); ΔHº = 57.32 kJ mol-1. The reaction conditions are altered as follows.
a) The reaction temperature is altered to 100° C keeping the pressure at 1 atm. Calculate the equilibrium constant.

60. 1 mol of CH4, 1 mol of CS2, 2 mol of H2S and 2 mol of H2 are mixed in a 500 ml flask. The equilibrium constant for the reaction is KC = 4 × 10–2 mol2 lit–2. In which direction will the reaction proceed to reach equilibrium?

61. At a particular temperature, KC = 4 × 10–2 for the reaction H2S(g) ⇌ H2(g) + ½ S2(g). Calculate KC for each of the following reactions:
i) 2H2S (g) ⇌ 2H2 (g) + S2 (g)
ii) 3H2S (g) ⇌ 3H2 (g) + 3/2 S2(g)

62. 28 g of nitrogen and 6 g of hydrogen were mixed in a 1 litre closed container. At equilibrium, 17 g of NH3 was produced. Calculate the weight of nitrogen and hydrogen at equilibrium.

63. The equilibrium for the dissociation of XY2 is given as 2XY2 (g) ⇌ 2XY (g) + Y2(g). If the degree of dissociation x is small compared to one, show that 2KP = Px3, where P is the total pressure and KP is the dissociation equilibrium constant of XY2.

64.
A sealed container was filled with 1 mol of A2 (g) and 1 mol of B2 (g) at 800 K and a total pressure of 1.00 bar. Calculate the amounts of the components in the mixture at equilibrium, given that K = 1 for the reaction A2 (g) + B2 (g) ⇌ 2AB (g).

65. The equilibrium constant KP for the reaction N2(g) + 3H2(g) ⇌ 2NH3(g) is 8.19 × 102 at 298 K and 4.6 × 10–1 at 498 K. Calculate ΔHo for the reaction.

66. The partial pressure of carbon dioxide in the reaction CaCO3 (s) ⇌ CaO (s) + CO2(g) is 1.017 × 10–3 atm at 500° C. Calculate KP at 600° C for the reaction. ΔH for the reaction is 181 kJ mol–1 and does not change in the given range of temperature.
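Two of the purely numerical exercises above can be checked with a few lines of Python. This is a sketch, not part of the test paper; for question 47 the temperatures 25° C and 37° C are taken as 298 K and 310 K:

```python
import math

# Q45: H2 + I2 <=> 2HI, start with 1 mol of each in the same volume,
# 0.4 mol of HI at equilibrium.
hi = 0.4
x = hi / 2                      # extent of reaction: 2x mol of HI formed
h2 = i2 = 1.0 - x               # 0.8 mol of each remains
kc = hi**2 / (h2 * i2)
print(round(kc, 2))             # 0.25

# Q47: van't Hoff equation, Kp(37 C) from Kp(25 C) = 0.0260, dH = 32.4 kJ/mol
R = 8.314                       # J mol^-1 K^-1
dH = 32.4e3                     # J mol^-1
T1, T2 = 298.0, 310.0           # K (assumed conversions of 25 C and 37 C)
kp2 = 0.0260 * math.exp(-dH / R * (1 / T2 - 1 / T1))
print(round(kp2, 3))            # ~0.043
```

Note that the Kc in the first part is dimensionless because the reaction has equal moles of gas on both sides, so the volume cancels.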
2020-05-31 00:57:48
http://fricas.github.io/api/RealSolvePackage.html
# RealSolvePackage

realSolve(lp, lv, eps) computes the list of the real solutions of the list lp of polynomials with integer coefficients with respect to the variables in lv, with precision eps.

solve(p, eps) finds the real zeroes of a univariate rational polynomial p with precision eps.

solve(p, eps) finds the real zeroes of a univariate integer polynomial p with precision eps.
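The `solve(p, eps)` behaviour described above — narrowing a real zero of a univariate polynomial to a given precision — can be mimicked with plain bisection. A rough Python sketch, not FriCAS code, which assumes a sign change has already been bracketed:

```python
from fractions import Fraction

def bisect_root(p, lo, hi, eps):
    """Narrow a bracketed real root of p to an interval of width < eps,
    using exact rational arithmetic throughout."""
    lo, hi = Fraction(lo), Fraction(hi)
    assert p(lo) * p(hi) < 0, "interval must bracket a sign change"
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if p(mid) == 0:
            return mid
        if p(lo) * p(mid) < 0:
            hi = mid            # root lies in the left half
        else:
            lo = mid            # root lies in the right half
    return (lo + hi) / 2

# Real zero of x^2 - 2 in [1, 2] to precision 10^-6 (approximates sqrt(2))
p = lambda x: x * x - 2
root = bisect_root(p, 1, 2, Fraction(1, 10**6))
print(float(root))              # ≈ 1.414214
```

Using `Fraction` keeps every intermediate value exact, so the final interval width really is below `eps`; the real FriCAS implementation uses root isolation rather than a user-supplied bracket.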
2017-05-29 19:01:42
https://au.mathworks.com/help/symbolic/abs.html
# abs

Symbolic absolute value (complex modulus or magnitude)

## Syntax

`abs(z)`

## Description

`abs(z)` returns the absolute value (or complex modulus) of `z`. Because symbolic variables are assumed to be complex by default, `abs` returns the complex modulus (magnitude) by default. If `z` is an array, `abs` acts element-wise on each element of `z`.

## Examples

    [abs(sym(1/2)), abs(sym(0)), abs(sym(pi) - 4)]
    ans = [ 1/2, 0, 4 - pi]

Compute `abs(x)^2` and simplify the result. Because symbolic variables are assumed to be complex by default, the result does not simplify to `x^2`.

    syms x
    simplify(abs(x)^2)
    ans = abs(x)^2

Assume `x` is real, and repeat the calculation. Now, the result is simplified to `x^2`.

    assume(x,'real')
    simplify(abs(x)^2)
    ans = x^2

Remove assumptions on `x` for further calculations. For details, see Use Assumptions on Symbolic Variables.

    assume(x,'clear')

Compute the absolute values of each element of matrix `A`.

    A = sym([1/2+i -25; i pi/2]);
    abs(A)
    ans =
    [ 5^(1/2)/2, 25]
    [ 1, pi/2]

Compute the absolute value of this expression assuming that the value of `x` is negative.

    syms x
    assume(x < 0)
    abs(5*x^3)
    ans = -5*x^3

For further computations, clear the assumption on `x` by recreating it using `syms`:

    syms x

## Input Arguments

Input, specified as a number, vector, matrix, or array, or a symbolic number, vector, matrix, or array, variable, function, or expression.

### Complex Modulus

The absolute value of a complex number z = x + y*i is the value $|z|=\sqrt{{x}^{2}+{y}^{2}}$. Here, x and y are real numbers. The absolute value of a complex number is also called a complex modulus.

## Tips

- Calling `abs` for a number that is not a symbolic object invokes the MATLAB® `abs` function.
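The behaviour documented above has a close analogue in SymPy, which can serve as a quick cross-check. This is SymPy, not MATLAB; `Abs` of a sign-restricted symbol evaluates immediately, much like `abs` after `assume(x,'real')`:

```python
from sympy import symbols, Abs, I

# Complex modulus of a number: |3 + 4i| = sqrt(3^2 + 4^2) = 5
print(Abs(3 + 4*I))          # 5

# For a general (complex-by-default) symbol, Abs(x) stays unevaluated
x = symbols('x')
print(Abs(x))                # Abs(x)

# With a sign assumption, Abs evaluates at once
xp = symbols('xp', positive=True)
xn = symbols('xn', negative=True)
print(Abs(xp))               # xp
print(Abs(5 * xn**3))        # -5*xn**3, mirroring the assume(x < 0) example
```

As in MATLAB, the key point is that the simplification is driven by assumptions attached to the symbol, not by the expression itself.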
2022-01-23 07:01:12
https://socratic.org/questions/how-long-should-the-ball-take-to-fall-on-the-floor-from-the-time-it-leaves-the-e#643915
How long should the ball take to fall on the floor from the time it leaves the edge of the table...?

A golf ball is rolling off a table top which is 1.0 m above the floor. The golf ball has an initial horizontal velocity component of v1x = 1.3 m/s. Its location has been marked at 0.100 s intervals.
a.) How long should the ball take to fall on the floor from the time it leaves the edge of the table?
b.) How far will the ball travel in the horizontal direction from the edge of the table before it hits the floor?

Jul 27, 2018

a. $t = 0.45 s$
b. $x = 0.59 m$

Explanation:

a. Vertical freefall takes place with no regard for any horizontal velocity. The ball will have the same vertical acceleration, and therefore time to the floor, whether it rolls off the table or is just dropped from the height of the table. (Note: this is g, also known as the acceleration due to gravity, doing its thing.)

Horizontally, the initial velocity, ${v}_{1 x} = 1.3 \frac{m}{s}$, continues without change while it is falling. The vertical velocity develops without affecting the horizontal velocity. Therefore it will hit the floor in the same time that it would if simply dropped from the height of the table.

Use the suvat formula $\textcolor{red}{y = u \cdot t + \left(\frac{1}{2}\right) \cdot a \cdot {t}^{2}}$ where $y = 1 m , u = 0 , a = g = 9.8 \frac{m}{s} ^ 2$ and solve for time.

The term suvat I used above refers to a set of formulas about motion with constant acceleration. This site http://studywell.com/maths/mechanics/kinematics-objects-motion/suvat-equations/ has 5 formulas in the set. It is often simplified to 4, and I have seen it simplified to 3 formulas. Go to this site if you are not sure what $\textcolor{red}{y = u \cdot t + \left(\frac{1}{2}\right) \cdot a \cdot {t}^{2}}$ is all about.

Plugging the data from our problem into the suggested suvat formula (colored $\textcolor{red}{red}$) and solving for t, we find the answer to part a.
$1 m = \frac{1}{2} \cdot 9.8 \frac{m}{s} ^ 2 \cdot {t}^{2}$

${t}^{2} = \frac{1 \cancel{m}}{\frac{1}{2} \cdot 9.8 \frac{\cancel{m}}{s} ^ 2} = \frac{1}{\frac{4.9}{s} ^ 2} = 0.2041 {s}^{2}$

$t = \sqrt{0.2041 {s}^{2}} = 0.45 s$

b. The horizontal distance the ball will travel in time t at a horizontal velocity of 1.3 m/s is calculated as follows. Since

$\text{velocity} = \frac{\text{distance}}{\text{time}}$

using a bit of algebra, we discover how to find $\text{distance}$ when you know $\text{velocity}$ and $\text{time}$:

$\text{distance} = \text{velocity} \cdot \text{time}$

$x = {v}_{\text{1x}} \cdot t = 1.3 \frac{m}{\cancel{s}} \cdot 0.45 \cancel{s} = 0.59 m$

I hope this helps,
Steve
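The arithmetic in both parts is easy to verify numerically; a quick Python check of the suvat results above:

```python
import math

g = 9.8          # m/s^2, acceleration due to gravity
y = 1.0          # m, height of the table top
v1x = 1.3        # m/s, horizontal velocity component

# a) y = (1/2) g t^2  =>  t = sqrt(2y/g)
t = math.sqrt(2 * y / g)
print(round(t, 2))       # 0.45 (seconds)

# b) horizontal range x = v1x * t
x = v1x * t
print(round(x, 2))       # 0.59 (metres)
```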
2022-08-13 03:02:36
http://www.maa.org/publications/maa-reviews/calculus-early-transcendentals?device=mobile
# Calculus: Early Transcendentals

Publisher: Pearson/Prentice Hall
Number of Pages: 790
Price: 109.33
ISBN: 0132273462

This is the "early transcendentals" version of a text that was reviewed here earlier. Most of what was said in that review applies to this book also. The main difference is that the logarithm and exponential functions are introduced much earlier in this version of the text.

The need for an "early transcendentals" option is a consequence of the widespread adoption of the idea (first proposed, I believe, by Felix Klein in his Elementary Mathematics from an Advanced Standpoint, where he argues that it be used in the schools) that the natural logarithm is best introduced as the integral of 1/x and that the exponential is best defined as the inverse of the natural logarithm.

Adopting Klein's suggestion was always a rather strange decision in the context of the average undergraduate calculus text. After all, no one felt the need to include a rigorous definition of the sine and cosine functions. Giving such a definition also requires doing some (rather hard) analysis, as one can see in Spivak's Calculus and in most standard real analysis texts. Furthermore, undergraduates arrive in college already "knowing" about these functions; it seems reasonable to simply recall their properties and use them, as we do for the trigonometric functions.

The approach taken in this book is less radical than that. The authors define a^x for rational x in the usual way (without bothering to prove the existence of n-th roots, of course), then extend the definition to irrational exponents using a limit argument (with at least some proofs). They then define the logarithm as the inverse function, and finally use the limit of (1 + r/n)^n to introduce the "natural" exponential function. (At this point of the text, not a word is said to justify that adjective!)
The derivative of the logarithm function is obtained first, then the inverse function theorem is used to find the derivative of the exponential function. This approach does have the advantage of introducing a serious example of the use of limits into what is usually a dreary chapter. (Until we meet the derivative, why do we care what the limit is?) I believe it is an improvement on the "late transcendentals" approach. But in the end I still have my doubts whether students are ready for this kind of analysis so early in the course.

For what it's worth, I would simply take for granted the existence of the logarithm and exponential functions and their basic properties. Then, using the definition of the derivative, it's fairly easy to prove that the derivative of a^x is some constant times a^x. Choosing the right value for a makes that constant equal to 1, which explains what is "natural" about e^x. An analogous argument explains why we must use radian measure if we want the derivative of sin(x) to be cos(x) rather than a constant times cos(x). This seems both simpler and more illuminating, even if it hides the inherent trickiness of defining these functions. There's time for that in the analysis course.

Fernando Q. Gouvêa is professor of mathematics at Colby College in Waterville, ME.

Tuesday, May 30, 2006
Reviewable: Yes
Include In BLL Rating: No
Dale Varberg, Edwin J. Purcell, and Steven E. Rigdon
Publication Date: 2007
Format: Hardcover
Audience:
Category: Textbook
Tags:
Fernando Q.
Gouvêa 05/30/2006

0 PRELIMINARIES 0.1 Real Numbers, Logic and Estimation 0.2 Inequalities and Absolute Values 0.3 The Rectangular Coordinate System 0.4 Graphs of Equations
1 FUNCTIONS 1.1 Functions and Their Graphs 1.2 Operations on Functions 1.3 Exponential and Logarithmic Functions 1.4 The Trigonometric Functions & Their Inverses 1.5 Chapter Review
2 LIMITS 2.1 Introduction to Limits 2.2 Rigorous Study of Limits 2.3 Limit Theorems 2.4 Limits Involving Transcendental Functions 2.5 Limits at Infinity, Infinite Limits 2.6 Continuity of Functions 2.7 Chapter Review
3 THE DERIVATIVE 3.1 Two Problems with One Theme 3.2 The Derivative 3.3 Rules for Finding Derivatives 3.4 Derivatives of Trigonometric Functions 3.5 The Chain Rule 3.6 Higher-Order Derivatives 3.7 Implicit Differentiation 3.8 Related Rates 3.9 Differentials and Approximations 3.10 Chapter Review
4 APPLICATIONS OF THE DERIVATIVE 4.1 Maxima and Minima 4.2 Monotonicity and Concavity 4.3 Local Extrema and Extrema on Open Intervals 4.4 Graphing Functions Using Calculus 4.5 The Mean Value Theorem for Derivatives 4.6 Solving Equations Numerically 4.7 Antiderivatives 4.8 Introduction to Differential Equations
5 THE DEFINITE INTEGRAL 5.1 Introduction to Area 5.2 The Definite Integral 5.3 The 1st Fundamental Theorem of Calculus 5.4 The 2nd Fundamental Theorem of Calculus and the Method of Substitution 5.5 The Mean Value Theorem for Integrals & the Use of Symmetry 5.6 Numerical Integration 5.7 Chapter Review
6 APPLICATIONS OF THE INTEGRAL 6.1 The Area of a Plane Region 6.2 Volumes of Solids: Slabs, Disks, Washers 6.3 Volumes of Solids of Revolution: Shells 6.4 Length of a Plane Curve 6.5 Work and Fluid Pressure 6.6 Moments, Center of Mass 6.7 Probability and Random Variables 6.8 Chapter Review
7 TECHNIQUES OF INTEGRATION & DIFFERENTIAL EQUATIONS 7.1 Basic Integration Rules 7.2 Integration by Parts 7.3 Some Trigonometric Integrals 7.4 Rationalizing Substitutions 7.5 The Method of Partial Fractions 7.6 Strategies for Integration 7.7 Growth and Decay 7.8 First-Order Linear Differential Equations 7.9 Approximations for Differential Equations 7.10 Chapter Review
8 INDETERMINATE FORMS & IMPROPER INTEGRALS 8.1 Indeterminate Forms of Type 0/0 8.2 Other Indeterminate Forms 8.3 Improper Integrals: Infinite Limits of Integration 8.4 Improper Integrals: Infinite Integrands 8.5 Chapter Review
9 INFINITE SERIES 9.1 Infinite Sequences 9.2 Infinite Series 9.3 Positive Series: The Integral Test 9.4 Positive Series: Other Tests 9.5 Alternating Series, Absolute Convergence, and Conditional Convergence 9.6 Power Series 9.7 Operations on Power Series 9.8 Taylor and Maclaurin Series 9.9 The Taylor Approximation to a Function 9.10 Chapter Review
10 CONICS AND POLAR COORDINATES 10.1 The Parabola 10.2 Ellipses and Hyperbolas 10.3 Translation and Rotation of Axes 10.4 Parametric Representation of Curves 10.5 The Polar Coordinate System 10.6 Graphs of Polar Equations 10.7 Calculus in Polar Coordinates 10.8 Chapter Review
11 GEOMETRY IN SPACE, VECTORS 11.1 Cartesian Coordinates in Three-Space 11.2 Vectors 11.3 The Dot Product 11.4 The Cross Product 11.5 Vector Valued Functions & Curvilinear Motion 11.6 Lines in Three-Space 11.7 Curvature and Components of Acceleration 11.8 Surfaces in Three Space 11.9 Cylindrical and Spherical Coordinates 11.10 Chapter Review
12 DERIVATIVES OF FUNCTIONS OF TWO OR MORE VARIABLES 12.1 Functions of Two or More Variables 12.2 Partial Derivatives 12.3 Limits and Continuity 12.4 Differentiability 12.5 Directional Derivatives and Gradients 12.6 The Chain Rule 12.7 Tangent Planes, Approximations 12.8 Maxima and Minima 12.9 Lagrange Multipliers 12.10 Chapter Review
13 MULTIPLE INTEGRATION 13.1 Double Integrals over Rectangles 13.2 Iterated Integrals 13.3 Double Integrals over Nonrectangular Regions 13.4 Double Integrals in Polar Coordinates 13.5 Applications of Double Integrals 13.6 Surface Area 13.7 Triple Integrals (Cartesian Coordinates) 13.8 Triple Integrals (Cylindrical & Spherical Coordinates) 13.9 Change of Variables in Multiple Integrals 13.10 Chapter Review
14 VECTOR CALCULUS 14.1 Vector Fields 14.2 Line Integrals 14.3 Independence of Path 14.4 Green's Theorem in the Plane 14.5 Surface Integrals 14.6 Gauss's Divergence Theorem 14.7 Stokes's Theorem 14.8 Chapter Review
APPENDIX A.1 Mathematical Induction A.2 Proofs of Several Theorems A.3 A Backward Look

Publish Book:
Modify Date: Tuesday, May 30, 2006
http://www.talkstats.com/threads/normal-approximation-to-the-binomial-distribution-help-needed.39201/
# Normal Approximation to the Binomial Distribution Help Needed

#### torchflame
##### New Member

SOLVED - Normal Approximation to the Binomial Distribution Help Needed

A multiple choice test consists of a series of questions, each with four possible choices.
a) If there are 60 questions, estimate the probability that a student guessing blindly on each question will get at least 30 right.
b) How many questions are needed in order to be 99% confident that a student who guesses blindly will score no more than 35%?

For a, I took this to ask P(X>=30; n=60, p=.25), which would translate to X~N(15, 11.25). So, P(X_N>=29.5), which is a z-score of 1.29, so 1-Phi(1.29)=1-.9015=.0985. But this seems too high to me. I think that it should be a very small probability, not nearly .1.

And for b, I am completely lost. I tried P(x/n<=.35; n, p=.25)=.99 so that X~N(.25n, .1875n), but that gave me n=1.48, which makes literally no sense. What am I doing wrong? I've never quite understood this.

Last edited:

#### Dason
##### Ambassador to the humans

Remember when you compute a z-score you want to divide by the standard deviation - not the variance.

#### torchflame
##### New Member

Oh, right! So with that z=4.32, so 1-Phi(4.32) is about 8*10^-6, according to my calculator, which seems much more right to me. Thanks!

When I try b with this, I get something like 2.33=(.1n+.5)/Sqrt(.1875n), which tells me that n=.2731, which manages to make less sense than the previous answer. Is there something wrong with the setup of this?

EDIT: So I looked at b again, and realized that the equation has two solutions, and the other, n=92, makes a lot more sense. Thanks!

Last edited:
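The corrected arithmetic in this thread is easy to check directly. The sketch below (my own rough check, not part of the original thread; variable names are mine) recomputes part (a) with the continuity correction, compares it against the exact binomial tail, and solves part (b)'s quadratic in sqrt(n):

```python
from math import erf, sqrt, comb

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Part (a): P(X >= 30) for X ~ Binomial(60, 0.25),
# approximated by N(mu=15, var=11.25) with continuity correction.
mu, var = 60 * 0.25, 60 * 0.25 * 0.75
z = (29.5 - mu) / sqrt(var)        # divide by the SD, not the variance
p_approx = 1 - phi(z)              # on the order of 1e-5

# Exact binomial tail for comparison
p_exact = sum(comb(60, k) * 0.25**k * 0.75**(60 - k) for k in range(30, 61))

# Part (b): 2.33 = (0.10*n + 0.5) / sqrt(0.1875*n) is a quadratic
# in u = sqrt(n); only the larger root is physically meaningful.
a, b, c = 0.10, -2.33 * sqrt(0.1875), 0.5
u = (-b + sqrt(b * b - 4 * a * c)) / (2 * a)
n_needed = u ** 2                  # about 91.5, so n = 92 questions
```

The smaller root of the same quadratic reproduces the n ≈ 0.27 value that puzzled the original poster.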
https://socratic.org/questions/how-do-you-simplify-sqrt-75-4
# How do you simplify sqrt(75/4)?

Jun 11, 2018

$\frac{5 \sqrt{3}}{2}$

#### Explanation:

When you need to simplify a numerical root, you should look for these two key facts:

• $\sqrt{{a}^{2}} = a$
• $\sqrt{a b} = \sqrt{a} \sqrt{b}$

So, anytime you have a number inside a root, you should try to write it as a product of other numbers, of which at least one is a perfect square.

Let's analyze your case. First of all, using the second property, we can write $\sqrt{\frac{75}{4}} = \frac{\sqrt{75}}{\sqrt{4}}$. In fact, every fraction can be read as a multiplication using $\frac{a}{b} = a \cdot \frac{1}{b}$.

Now let's deal with the two roots separately. I'd start with the denominator, since we already have a perfect square under a square root, so they simplify (see the first rule above): we have $\frac{\sqrt{75}}{\sqrt{4}} = \frac{\sqrt{75}}{2}$.

As for $\sqrt{75}$, we can see that $75 = 25 \cdot 3$, and $25 = {5}^{2}$ is a perfect square. So, by the second rule above, we have $\sqrt{75} = \sqrt{25 \cdot 3} = \sqrt{25} \cdot \sqrt{3} = 5 \sqrt{3}$, and therefore $\sqrt{\frac{75}{4}} = \frac{5 \sqrt{3}}{2}$.

But how do we find the most appropriate way to rewrite our number, in this case $75 = 25 \cdot 3$? You can use the prime factorization: $75 = 3 \cdot {5}^{2}$ and select only the primes with even exponent. In this case, ${5}^{2}$.

Jun 11, 2018

$\pm \frac{5 \sqrt{3}}{2}$

#### Explanation:

If you are stuck it is always worth having a 'play' with numbers and seeing what comes up. Consider the 75. This is exactly divisible by 5, so $75 \div 5 = 15$. Thus we have: $\sqrt{\frac{5 \times 15}{4}}$. But 15 is the same as $3 \times 5$, giving: $\sqrt{\frac{5 \times 5 \times 3}{4}}$, so $\frac{\sqrt{{5}^{2} \times 3}}{\sqrt{4}} = \pm \frac{5 \sqrt{3}}{2}$.
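The "select the primes with even exponent" idea is easy to mechanize. As a rough illustration (my own helper, not part of the answers above), this pulls the largest square factor out of a radicand:

```python
def simplify_sqrt(n):
    """Write sqrt(n) as a*sqrt(b) by repeatedly extracting square factors."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # f^2 divides b: move one f outside the root
            a *= f
            b //= f * f
        f += 1
    return a, b                   # sqrt(n) == a * sqrt(b), with b square-free

# sqrt(75) = 5*sqrt(3), so sqrt(75/4) = 5*sqrt(3)/2
```

For the problem above, `simplify_sqrt(75)` returns `(5, 3)` and `simplify_sqrt(4)` returns `(2, 1)`, reproducing the answer 5√3 / 2.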
https://www.doubtnut.com/question-answer-chemistry/what-will-be-the-heat-of-formation-of-methane-if-the-heat-of-combustion-of-carbon-is-x-kj-heat-of-fo-34506461
# What will be the heat of formation of methane, if the heat of combustion of carbon is '-x' kJ, the heat of formation of water is '-y' kJ, and the heat of combustion of methane is '-z' kJ?

Updated On: 17-04-2022

(-x-y+z) kJ
(-z-x+2y) kJ
(-x-2y-z) kJ
(-x-2y+z) kJ
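By Hess's law, the three given reactions combine as (C combustion) + 2 × (H2O formation) − (CH4 combustion) to give C + 2 H2 → CH4, i.e. ΔHf = −x − 2y + z, the last option. A quick numeric sanity check (the magnitudes below are illustrative values I have assumed; they are not part of the problem):

```python
# Hess's law check for the formation of methane.
# Assumed illustrative magnitudes in kJ/mol, roughly textbook values:
x, y, z = 393.5, 285.8, 890.4

dH_c_combustion   = -x   # C(s) + O2(g) -> CO2(g)
dH_h2o_formation  = -y   # H2(g) + 1/2 O2(g) -> H2O(l)
dH_ch4_combustion = -z   # CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)

# Target: C(s) + 2 H2(g) -> CH4(g)
# = (C combustion) + 2*(H2O formation) - (CH4 combustion)
dH_f_ch4 = dH_c_combustion + 2 * dH_h2o_formation - dH_ch4_combustion
# Algebraically this is -x - 2y + z; with the assumed values it lands
# near the accepted formation enthalpy of methane (~ -75 kJ/mol).
```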
https://iq.opengenus.org/building-java-application-using-apache-maven-command-line/
# Building a basic Java Application using Apache Maven (Command Line)

Reading time: 30 minutes | Coding time: 15 minutes

Apache Maven is a configuration-oriented build tool used to build projects whose source code is written in Java. It can also be extended to work with C++ and Ruby. Apache Maven automatically creates a hierarchy for source code and its unit test cases, and after the build it creates a target/ folder containing the .jar or .war file, which is a platform-independent binary.

Our aim is to create a basic Java project from scratch. All the required source code files will be automatically handled by Maven, stored in the src/main/ folder, and built using pre-defined Maven goals. Here, we create a simple Java project with one source file, App.java, which will give the following output:

Hello World

## Pre-requisite: Download Maven and set its environment

Apache Maven is a free build tool. Its binary can be downloaded from the Apache Maven Central Repository.

Since the download is a binary archive and not an installer, Apache Maven needs no installation. The files just need to be on the local machine, and can then be used for building projects.

Setting up the Apache Maven environment on our local machine:

1. Save the Apache Maven binary anywhere you want on your local machine. It is a good practise to store it in the C:/ root directory. Example: C:\apache-maven-{version}
2. Open Command Prompt in Windows and type the following command: mvn -version — an error message is displayed because Maven is not yet configured on our local machine.
3. Open Control Panel -> System and Security -> System, and click on Advanced System Settings in the left side panel. The System Properties dialog box opens.
4. Click on the Environment Variables button.
5. In System Variables, click on the New button and specify the following details. Variable Name: M2_HOME; Variable Value: {path of Maven binary}. In our case, Variable Value: C:\apache-maven-{version}
6. In System Variables, select the Path variable and click on Edit. Click on the New button and add the path specified in step 5 followed by \bin. Path: {path of Maven binary}\bin. Example: C:\apache-maven-{version}\bin
7. Open Command Prompt and type the following command: mvn -version. If we get the correct version message, then Maven is successfully configured on our local machine.

Let us get started with our basic Java project using Maven.

# Java Project using Maven

## Step 1: Create a Maven Project

Open the command line on your machine and type the following command:

mvn archetype:generate -DgroupId={group name} -DartifactId={project name} -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false

Here,

• Group ID is the name of the group you want for your project. A group is basically a collection of projects. Provide any group name you want. Here, we use "Maven_Basics" as the group name.
• Artifact ID is the name you want to give the current project. Here, we use "project1" as the project name. Note: by convention, you should start the project name with a lowercase letter, although a capital letter or underscore will also work.
• The rest of the parameters in the command set up the hierarchy of the Maven project and automatically configure the basic dependencies to be used in the project. On first use of the command, the Maven tool downloads all the configuration files from the Central Maven Repository.

Implementation:

mvn archetype:generate -DgroupId=Maven_Basics -DartifactId=project1 -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false

If we get a Build Success message, Maven has successfully created the hierarchy for our project. The following hierarchy is set:

• Maven creates a folder named project1, which is the project name we gave in the first command.
• In the project1/ folder, there is a src/ folder and a pom.xml file.
• pom.xml is the main configuration file of Maven; it includes all the dependencies and metadata (data about data) for the Maven project.
• The src/ folder consists of two subfolders: main/ and test/
• The test/ folder is used to store unit test cases, written with a tool such as JUnit.
• The main/ folder contains the Java source code in the following path: {projectname}/src/main/java/{groupname}, where groupname and projectname are specified in the first Maven command.
• The above path contains a sample Java program, App.java

## Step 2: Write project source code in Java

• Delete the sample App.java file.
• Write your source code and place it in the following path: {projectname}/src/main/java/{groupname}

Implementation: Here, we use the default sample file App.java, which prints the "Hello World" string as output.

package Maven_Basics;

/**
 * Hello world!
 */
public class App {
    public static void main( String[] args ) {
        System.out.println( "Hello World!" );
    }
}

Here,

• The package is the group name specified in step 1.
• The class must be public because this source code executable is meant to be portable and platform independent.

Note: the cat command is used to see the content of a file.

## Step 3: Clean the project environment

• Maven creates output binary files and documentation reports in the target/ folder.
• The target/ folder is created in the {projectname}/ folder, alongside the src/ folder and the pom.xml file.
• Before building a project, it is a good practise to clean the environment, i.e., to delete previous builds in the target/ folder and remove their configuration.
• Using Maven, we can do this tedious task of finding and removing configurations with one simple command.

Command: mvn clean

## Step 4: Compile the Java code

Maven uses the following command to compile the Java code:

mvn compile

It compiles the Java code and gives one of the following outputs:

1. Build Failure: if the Java code fails to compile successfully.
2. Build Success: if the Java code compiles successfully.

If the build is successful, Maven creates the target/ folder and places the compiled .class files for the Java source code in the target/classes/{groupname} folder.

## Step 5: Unit Testing for the source code

For this step, you should know the unit testing framework JUnit. Since we are creating a simple project, we can skip writing real test cases for our build. Because testing is an important part of a Maven build, we use the sample AppTest.java file present in {projectname}/src/test/java/{groupname}, which sets all test cases to true, meaning every test passes.

AppTest.java file:

package Maven_Basics;

import static org.junit.Assert.assertTrue;

import org.junit.Test;

/**
 * Unit test for simple App.
 */
public class AppTest {
    /**
     * Rigorous Test :-)
     */
    @Test
    public void shouldAnswerWithTrue() {
        assertTrue( true );
    }
}

Here,

• The package is the group name of the project.
• Two libraries for the JUnit tool are imported.
• @Test is a JUnit annotation that marks the method as a test case, so the test framework knows to run it.

Command: mvn test

## Step 6: Build the project source code file and create executable binary

Command: mvn install

• Checks the configuration and dependencies from the pom.xml file
• Creates a platform-independent binary executable .jar or .war file in the target/ folder

Here, the footer of the image shows the contents of the target/ folder. The project1-1.0-SNAPSHOT.jar file is the platform-independent executable file which can be executed on any platform or OS like Windows/Linux/macOS etc.

## Step 7: Creating Documentation and Report for Maven Build

Command: mvn site

• It uses plugins like Maven-Surefire etc. to create build reports using HTML and CSS
• All the documentation and reports are stored in the target/site/ folder.

The summary of the site report is shown below:

## References

Apache Maven: Source code and Documentation basics
https://physics.stackexchange.com/questions/447237/why-is-the-integration-constant-in-a-purely-inductive-ac-circuit-assumed-to-be-z
# Why is the integration constant in a purely inductive AC circuit assumed to be zero?

While studying AC circuits, I came across the usual differential equation of an AC circuit (in which the voltage across the source is a sinusoidal function of time) containing an inductor only. After applying Kirchhoff's law and integrating, the book says that the integration constant should be zero: because the voltage is sinusoidal with zero average over a single time period, the current is also expected to be sinusoidal, with zero mean current over a single time period. This doesn't sound like a satisfactory reason to me. Can someone give a better reason than this? Any help will be appreciated.

EDIT: I've got some really good answers in which many people pointed out that in reality the value of the arbitrary constant doesn't matter, because the circuit would have some resistance, and after integration the solution would contain an exponentially decaying term multiplied by an arbitrary constant, and that would decay away very quickly. But unfortunately that still doesn't answer my question completely. Suppose I have an inductance, a resistance and a source of AC with a fixed peak voltage, and I connect them via a switch which I keep open. As soon as I close the switch there will be a current in the circuit, which will depend on that arbitrary constant of integration. I can measure the initial current, and there should be a unique value of initial current for a given value of inductance, resistance, frequency and voltage of the AC source. I want to get an expression that would PREDICT this initial current (and therefore should not include an arbitrary constant). What will be that expression?

In a problem like this, where there is no natural starting time for the analysis, we are usually looking for the behavior of the system after it has been operating for a "long time", and any initial stored energy has decayed away.
In a real circuit, if you started the system at some time $$t_0$$ with an initial current through the resistor, that current would decay away over time (realistically, within a few seconds at most) due to the resistance of the wires (which we're otherwise ignoring here). So, to have had a nonzero current (above the current due to the source), it would have had to have started with an even higher current at some time before $$t=0$$. If the system has really been running since $$t=-\infty$$, it would have had to have started with an infinite initial current to still have a finite current $$C$$ at $$t=0$$. Since there's nothing in the problem to say when the source was first applied or how long it has been running, and we know it didn't actually start in the distant past with an infinite initial current, we assume that in this problem $$C=0$$. Probably one of the next things you'll study after this kind of problem is circuits that have switches in them that change the configuration at some instant in time, which gives us a natural point in time to define as $$t=0$$, and which will lead to nonzero constants of integration when they are analyzed. You have set up a mathematical model of an ideal system and on solving the first order differential equation you get a result for the current which has one constant of integration. You are quite correct that in theory that constant of integration can have any value but the book (author) now switches back to the real world and asks "what is the problem with assigning a non-zero value to the constant of integration?" The simple answer is that this is a situation which will never happen because the circuit will always have resistance (and capacitance) no matter how small and even if there was an "extra" current equal to the integration constant at a certain time, that current would not persist and decay away over time. As an example would you expect the current in the circuit to be $$I(t) = 999.999+0.001 \cos (\omega\,t)$$? 
The mathematics says that it can but in the real world it cannot. Advance your mathematical model one step and include a relatively small amount of resistance $$(\omega L \gg R)$$ in the circuit with $$I(0) = 0.1$$. Solve the differential equation and see what happens over time. The solution will be in two parts, with one part decaying over time and the other part being oscillatory with frequency $$\omega$$. As an example, look at the solution of $$100\dot I+I = \sin(t)$$ with $$I(0) =0.1$$: $$I(t)= 0.109999\,{\rm e}^{-t/100}+0.00009999\sin(t)-0.009999 \cos(t)$$ with the left-hand graph of current against time. Here you can see the decay of the initial current of $$0.1$$ towards the approximately $$-0.01\cos(\omega t)$$ steady state behaviour. The right-hand graph includes, in red, the solution to $$100\dot I = \sin(t)$$ with $$I(0) =-0.01$$, ie no resistance in the circuit, and you can see that for times greater than $$t \approx 500$$ the two solutions are very similar.
• Thanks, I suppose you're saying that the integration constant doesn't matter (in a physical sense) because in reality, in any circuit having finite resistance, that term dies out pretty quickly. So the solution I've got after putting c = 0 is close to the real solution. – Shivansh J Dec 14 '18 at 16:38
• Pardon me, but still I've got a doubt. Is there no way to determine what the true value of that constant will be? Obviously in the real world, suppose I want to know the current with the best precision possible (so I don't want to neglect that constant); then for a given value of inductance, frequency of the AC source and resistance there should be a unique value of C (which I can find by measuring the current at t=0). Is there no way to PREDICT this value using physics? – Shivansh J Dec 14 '18 at 16:43
• @ShivanshJ "close to the real solution" is the steady state solution, which will be close to what one might expect to get as the result of doing an experiment in the laboratory.
– Farcher Dec 14 '18 at 16:43 • @ShivanshJ The prediction would depend on knowing the applied voltage (or current) at a given time. – Farcher Dec 14 '18 at 16:46 • By real solution I mean that perfect equation which would predict the current at t = 0 and would also give steady state solution by putting time t = infinity – Shivansh J Dec 14 '18 at 16:47
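Farcher's worked example is easy to reproduce numerically. The sketch below (my own code, built from the closed-form solution quoted in the answer, with the toy coefficients from that example) shows how the integration constant, fixed by the initial condition I(0), appears only in the transient term and is gone after a few time constants:

```python
from math import exp, sin, cos

# Particular-solution coefficients for 100*I' + I = sin(t):
A = 1 / 10001        # sin coefficient
B = -100 / 10001     # cos coefficient (~ -0.01)

def current(t, I0=0.1):
    """Closed-form solution: transient C*exp(-t/100) plus steady state."""
    C = I0 - B                     # integration constant fixed by I(0) = I0
    return C * exp(-t / 100) + A * sin(t) + B * cos(t)

# At t=0 the current equals I0 exactly; by t ~ 500 (five time constants)
# only the ~0.01-amplitude steady-state oscillation remains, regardless
# of the value the integration constant started with.
```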
https://pqnelson.wordpress.com/2012/06/19/thinking-infinitesimally/
## Thinking “Infinitesimally” 1. SO I’d like to reiterate the intuitive picture one should have when working with calculus. We should think of a differential $\mathrm{d}x$ as a “really small” change in $x$…well, it’s the “smallest” possible change! The reason I bring this up, well, just to reiterate the point when we consider $y=f(x)$ graphed in a neighborhood $x_{0}-\Delta x\leq x\leq x_{0}+\Delta x$, as $\Delta x$ gets smaller…the curve more closely resembles a line. This is the tangent line to the point $(x_{0},f(x_{0}))$. Mixing notations, if $y=f(x)$, we obtain the differential quantity $\mathrm{d}y = f'(x)\,\mathrm{d}x$. The intuition should be: when I look at the “microscopic” scale, the curve “behaves linearly.” 2. Example. Lets consider the sine function. The linear approximation near $x=0$ would be $t(x)=x$. So if we plot the sine function in light blue, the approximation dashed in red, for values of $x$ between $-\pi/4\leq x\leq\pi/4$, we have the following graph: Observe the linear approximation begins to differ from the actual function near the boundaries of the domain. Look, we can even consider the area between these curves! We see this is the cumulative error, and it is (1)$\displaystyle E = \int^{\pi/4}_{0}\bigl(\sin(x)-x\bigr)\,\mathrm{d}x = \left.-\cos(x)-\frac{x^{2}}{2}\right|^{\pi/4}_{0}$ Evaluating the limits gives us (2)$\displaystyle E = \frac{-\sqrt{2}}{2}-\frac{\pi^{2}}{32}+1\approx -0.0155319187$ This is just for the values of $x\geq0$. We double this quantity to get the total error of our approximation, in the sense that this describes the area between the two curves. 3. If we “zoom in” more, considering the domain $-\pi/8\leq x\leq\pi/8$, then our graph becomes The cumulative error in this case becomes (3)$\displaystyle E = -\cos(\pi/8)-\frac{\pi^{2}}{128}+1\approx -0.000985817$ Observe our linear approximation now works better than before! 
Since the error is greatest at the boundary, we see (4)$\displaystyle \left|\sin(\pi/4)-\pi/4\right|\approx 0.078291$ and $\displaystyle \left|\sin(\pi/8)-\pi/8\right|\approx 0.010015$ The punchline is: as we zoom in closer and closer to the curve, it appears to more and more resemble a line. 4. Microscopic versus Macroscopic. If we consider quantities $\mathrm{d}x$ as “microscopic”, then what’s a “macroscopic” quantity? It’s simply a finite number, i.e., one without “$\mathrm{d}$”s of any sort.
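The cumulative errors quoted in (2) and (3) follow from the antiderivative in (1). A small check (my own code, not part of the original post) also shows how fast the error shrinks as we zoom in:

```python
from math import cos, pi

def cumulative_error(b):
    """E(b) = integral of (sin x - x) dx from 0 to b = 1 - cos(b) - b^2/2."""
    return 1 - cos(b) - b**2 / 2

E_quarter = cumulative_error(pi / 4)   # ~ -0.0155319
E_eighth  = cumulative_error(pi / 8)   # ~ -0.000985817

# Halving the window shrinks the cumulative error by roughly a factor
# of 16: since sin(x) - x ~ -x^3/6, the integral scales like b^4.
```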
https://brilliant.org/problems/circle-in-a-trapezium/
# Circle in a trapezium

Geometry Level 3

A circle is inscribed in an isosceles trapezium, as shown below. The two blue sides have lengths 8 and 18, respectively. What is the radius of the circle?
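One standard way to attack this (my own solution sketch, not part of the problem page): in a tangential trapezoid the two parallel sides sum to the two legs, which pins down the leg length, hence the height, and the inscribed circle's diameter equals the height:

```python
from math import sqrt

a, b = 8, 18                 # the two parallel (blue) sides
leg = (a + b) / 2            # tangential polygon: a + b = leg + leg
h = sqrt(leg**2 - ((b - a) / 2)**2)   # height from leg and half the overhang
r = h / 2                    # the inscribed circle spans the full height

# Equivalently, for a tangential isosceles trapezoid, r = sqrt(a*b)/2.
```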
http://insidechem.blogspot.com/2017/04/charless-law.html
Charles's law (also known as the law of volumes) is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is: When the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be directly related.[1] This directly proportional relationship can be written as: ${\displaystyle V\propto T}$ or ${\displaystyle {\frac {V}{T}}=k,}$ where: V is the volume of the gas, T is the temperature of the gas (measured in kelvins), k is a constant. This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature will lead to a decrease in volume. For comparing the same substance under two different sets of conditions, the law can be written as: ${\displaystyle {\frac {V_{1}}{T_{1}}}={\frac {V_{2}}{T_{2}}}\qquad {\text{or}}\qquad {\frac {V_{2}}{V_{1}}}={\frac {T_{2}}{T_{1}}}\qquad {\text{or}}\qquad V_{1}T_{2}=V_{2}T_{1}.}$ The equation shows that, as absolute temperature increases, the volume of the gas also increases in proportion. Source: Wikipedia
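The two-state form V₁/T₁ = V₂/T₂ translates directly into a small helper; the function name and example values below are illustrative, not from the source:

```python
# Sketch of Charles's law: V1 / T1 = V2 / T2 at constant pressure.
# Temperatures must be absolute (kelvins), never Celsius or Fahrenheit.
def charles_volume(v1, t1_k, t2_k):
    """Volume after an isobaric temperature change from t1_k to t2_k."""
    if t1_k <= 0 or t2_k <= 0:
        raise ValueError("absolute temperatures must be positive")
    return v1 * t2_k / t1_k

# 2.0 L of gas warmed from 300 K to 450 K at constant pressure:
print(charles_volume(2.0, 300.0, 450.0))  # → 3.0
```

Note the guard against non-positive temperatures: the proportionality only makes sense on the absolute (Kelvin) scale.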
2018-05-28 01:13:54
https://schulte-mecklenbeck.com/post/2017-11-13-the-root-of-the-problem/
The root of the problem

One of the root causes of where we are (as a science) in psychology and many other disciplines in terms of reproducibility of key (and other) results could not be better summed up than by the man himself, Daryl Bem (2002):

"If a datum suggests a new hypothesis, try to find additional evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don't like, or trials, observers, or interviewers who gave you anomalous results, drop them (temporarily). Go on a fishing expedition for something — anything — interesting."

'Go on a fishing expedition' – why should anything good come from such advice? Bem goes on:

"No, this is not immoral (SIC!). The rules of scientific and statistical inference that we overlearn in graduate school apply to the "Context of Justification." They tell us what we can conclude in the articles we write for public consumption, and they give our readers criteria for deciding whether or not to believe us. But in the "Context of Discovery," there are no formal rules, only heuristics or strategies."

I disagree with this statement, because the idea of finding something through torturing the data (until they confess) is a huge source of false positive results. We find an effect and falsely conclude that something is there when in fact there is nothing. I found the above quote when reading this paper by Zwaan, Etz, Lucas & Donnellan (2017) – a target article for BBS which presents six common arguments against replication and a set of really good responses for such discussions.
Here are the six 'concerns' the authors discuss:

Concern I: Context Is Too Variable
Concern II: The Theoretical Value of Direct Replications Is Limited
Concern III: Direct Replications Are Not Feasible in Certain Domains
Concern IV: Replications Are a Distraction
Concern V: Replications Affect Reputations
Concern VI: There Is No Standard Method to Evaluate Replication Results

Both are really good reads – for very different reasons.

Bem, D. (2002). Writing the empirical journal article. In Darley, J. M., Zanna, M. P., & Roediger III, H. L. (Eds.), The Compleat Academic: A Career Guide. Washington, DC: American Psychological Association.

Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, B. (2017, November 1). Making Replication Mainstream. Retrieved from psyarxiv.com/4tg9c

Michael Schulte-Mecklenbeck, Associate Professor
2020-12-01 14:40:29
https://support.sas.com/documentation/cdl/en/lrdict/64316/HTML/default/a000270634.htm
Functions and CALL Routines

# PDF Function

Returns a value from a probability density (mass) function.

Category: Probability
Alias: PMF

## Syntax

PDF(dist, quantile <, parm-1, ..., parm-k>)

### Arguments

dist is a character constant, variable, or expression that identifies the distribution. Valid distributions are as follows:

Distribution              Argument
Bernoulli                 BERNOULLI
Beta                      BETA
Binomial                  BINOMIAL
Cauchy                    CAUCHY
Chi-Square                CHISQUARE
Exponential               EXPONENTIAL
F                         F
Gamma                     GAMMA
Geometric                 GEOMETRIC
Hypergeometric            HYPERGEOMETRIC
Laplace                   LAPLACE
Logistic                  LOGISTIC
Lognormal                 LOGNORMAL
Negative binomial         NEGBINOMIAL
Normal                    NORMAL|GAUSS
Normal mixture            NORMALMIX
Pareto                    PARETO
Poisson                   POISSON
T                         T
Uniform                   UNIFORM
Wald (inverse Gaussian)   WALD|IGAUSS
Weibull                   WEIBULL

Note: Except for T, F, and NORMALMIX, you can minimally identify any distribution by its first four characters.

quantile is a numeric constant, variable, or expression that specifies the value of the random variable.

parm-1,...,parm-k are optional numeric constants, variables, or expressions that specify the values of shape, location, or scale parameters appropriate for the specific distribution. See the sections below for complete information about these parameters.

## Syntax

PDF('BERNOULLI',x,p)

where x is a numeric random variable, and p is a numeric probability of success. Range: 0 ≤ p ≤ 1.

The PDF function for the Bernoulli distribution returns the probability density function of a Bernoulli distribution with probability of success p, evaluated at the value x:

$$f(x) = p^{x}(1-p)^{1-x}, \qquad x \in \{0,1\}$$

Note: There are no location or scale parameters for this distribution.

## Syntax

PDF('BETA',x,a,b<,l,r>)

where x is a numeric random variable; a and b are numeric shape parameters (a > 0, b > 0); l is the numeric left location parameter (default 0); r is the numeric right location parameter (default 1). Range: r > l.

The PDF function for the beta distribution returns the probability density function of a beta distribution with shape parameters a and b, evaluated at the value x:

$$f(x) = \frac{1}{B(a,b)}\,\frac{(x-l)^{a-1}(r-x)^{b-1}}{(r-l)^{a+b-1}}, \qquad l < x < r$$

Note: The standardized quantity (x − l)/(r − l) is forced to lie in [0, 1].

## Syntax

PDF('BINOMIAL',m,p,n)

where m is an integer random variable that counts the number of successes (m = 0, 1, ..., n); p is a numeric probability of success (0 ≤ p ≤ 1); n is an integer parameter that counts the number of independent Bernoulli trials (n = 0, 1, ...).

The PDF function for the binomial distribution returns the probability density function of a binomial distribution with parameters p and n, evaluated at the value m:

$$f(m) = \binom{n}{m} p^{m}(1-p)^{n-m}$$

Note: There are no location or scale parameters for the binomial distribution.

## Syntax

PDF('CAUCHY',x<,θ,λ>)

where x is a numeric random variable; θ is a numeric location parameter (default 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the Cauchy distribution returns the probability density function of a Cauchy distribution with location θ and scale λ, evaluated at the value x:

$$f(x) = \frac{1}{\pi\lambda\left[1+\left(\frac{x-\theta}{\lambda}\right)^{2}\right]}$$

## Syntax

PDF('CHISQUARE',x,df<,nc>)

where x is a numeric random variable; df is a numeric degrees-of-freedom parameter (df > 0); nc is an optional numeric non-centrality parameter (nc ≥ 0).

The PDF function for the chi-square distribution returns the probability density function of a chi-square distribution with df degrees of freedom and non-centrality parameter nc, evaluated at the value x. This function accepts non-integer degrees of freedom. If nc is omitted or equal to zero, the value returned is from the central chi-square distribution. With p_c(·,·) denoting the central chi-square density, the non-central density is the Poisson-weighted mixture

$$f(x) = \sum_{j=0}^{\infty} e^{-nc/2}\,\frac{(nc/2)^{j}}{j!}\; p_{c}(x,\,df+2j)$$

where $p_c(x, df) = \tfrac{1}{2}\,p_g(x/2,\, df/2)$ and $p_g(y,b)$ is the gamma density

$$p_g(y,b) = \frac{y^{b-1}e^{-y}}{\Gamma(b)}$$

## Syntax

PDF('EXPONENTIAL',x<,λ>)

where x is a numeric random variable and λ is a scale parameter (default 1, λ > 0).

The PDF function for the exponential distribution returns the probability density function of an exponential distribution with scale λ, evaluated at the value x:

$$f(x) = \frac{1}{\lambda}\,e^{-x/\lambda}, \qquad x \ge 0$$

## Syntax

PDF('F',x,ndf,ddf<,nc>)

where x is a numeric random variable; ndf is a numeric numerator degrees-of-freedom parameter (ndf > 0); ddf is a numeric denominator degrees-of-freedom parameter (ddf > 0); nc is a numeric non-centrality parameter (nc ≥ 0).

The PDF function for the F distribution returns the probability density function of an F distribution with ndf numerator and ddf denominator degrees of freedom and non-centrality parameter nc, evaluated at the value x. Non-integer degrees of freedom are accepted. If nc is omitted or equal to zero, the value returned is from the central F distribution. Let $\nu_1$ = ndf and $\nu_2$ = ddf; the central density is

$$p_F(f,\nu_1,\nu_2) = \frac{\Gamma\!\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\!\left(\frac{\nu_1}{2}\right)\Gamma\!\left(\frac{\nu_2}{2}\right)}\left(\frac{\nu_1}{\nu_2}\right)^{\nu_1/2} f^{\,\nu_1/2-1}\left(1+\frac{\nu_1}{\nu_2}f\right)^{-(\nu_1+\nu_2)/2}$$

For $\lambda$ = nc > 0, the non-central density is the Poisson($\lambda/2$)-weighted mixture of suitably rescaled central components with numerator degrees of freedom $\nu_1 + 2j$.

Note: There are no location or scale parameters for the F distribution.

## Syntax

PDF('GAMMA',x,a<,λ>)

where x is a numeric random variable; a is a numeric shape parameter (a > 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the gamma distribution returns the probability density function of a gamma distribution with shape a and scale λ, evaluated at the value x:

$$f(x) = \frac{1}{\Gamma(a)\,\lambda}\left(\frac{x}{\lambda}\right)^{a-1} e^{-x/\lambda}, \qquad x > 0$$

## Syntax

PDF('GEOMETRIC',m,p)

where m is a numeric random variable that denotes the number of failures before the first success (m ≥ 0), and p is a numeric probability of success (0 ≤ p ≤ 1).

The PDF function for the geometric distribution returns the probability density function of a geometric distribution with parameter p, evaluated at the value m:

$$f(m) = p\,(1-p)^{m}$$

Note: There are no location or scale parameters for this distribution.

## Syntax

PDF('HYPER',x,N,R,n<,o>)

where x is an integer random variable; N is an integer population size (N = 1, 2, ...); R is an integer number of items in the category of interest (R = 0, 1, ..., N); n is an integer sample size (n = 1, 2, ..., N); o is an optional numeric odds-ratio parameter (o > 0).

The PDF function for the hypergeometric distribution returns the probability density function of an extended hypergeometric distribution with population size N, number of items R, sample size n, and odds ratio o, evaluated at the value x:

$$f(x) = \frac{\binom{R}{x}\binom{N-R}{n-x}\, o^{x}}{\sum_{j}\binom{R}{j}\binom{N-R}{n-j}\, o^{j}}$$

If o is omitted or equal to 1, this reduces to the usual hypergeometric probability $\binom{R}{x}\binom{N-R}{n-x}\big/\binom{N}{n}$.

## Syntax

PDF('LAPLACE',x<,θ,λ>)

where x is a numeric random variable; θ is a numeric location parameter (default 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the Laplace distribution returns the probability density function of the Laplace distribution with location θ and scale λ, evaluated at the value x:

$$f(x) = \frac{1}{2\lambda}\, e^{-|x-\theta|/\lambda}$$

## Syntax

PDF('LOGISTIC',x<,θ,λ>)

where x is a numeric random variable; θ is a numeric location parameter (default 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the logistic distribution returns the probability density function of a logistic distribution with location θ and scale λ, evaluated at the value x:

$$f(x) = \frac{e^{-(x-\theta)/\lambda}}{\lambda\left(1+e^{-(x-\theta)/\lambda}\right)^{2}}$$

## Syntax

PDF('LOGNORMAL',x<,θ,λ>)

where x is a numeric random variable; θ is a numeric log scale parameter (exp(θ) is a scale parameter; default 0); λ is a numeric shape parameter (default 1, λ > 0).

The PDF function for the lognormal distribution returns the probability density function of a lognormal distribution with log scale θ and shape λ, evaluated at the value x:

$$f(x) = \frac{1}{\lambda x\sqrt{2\pi}}\, e^{-(\ln x-\theta)^{2}/(2\lambda^{2})}, \qquad x > 0$$

## Syntax

PDF('NEGBINOMIAL',m,p,n)

where m is a non-negative integer random variable that counts the number of failures (m = 0, 1, ...); p is a numeric probability of success (0 ≤ p ≤ 1); n is a numeric value that counts the number of successes (n > 0).

The PDF function for the negative binomial distribution returns the probability density function of a negative binomial distribution with probability of success p and number of successes n, evaluated at the value m:

$$f(m) = \binom{m+n-1}{m}\, p^{n}(1-p)^{m}$$

Note: There are no location or scale parameters for the negative binomial distribution.

## Syntax

PDF('NORMAL',x<,θ,λ>)

where x is a numeric random variable; θ is a numeric location parameter (default 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the normal distribution returns the probability density function of a normal distribution with location θ and scale λ, evaluated at the value x:

$$f(x) = \frac{1}{\lambda\sqrt{2\pi}}\, e^{-(x-\theta)^{2}/(2\lambda^{2})}$$

## Syntax

PDF('NORMALMIX',x,n,p,m,s)

where x is a numeric random variable; n is the integer number of mixture components (n = 1, 2, ...); p is the n proportions $p_1,\dots,p_n$ (each $p_i \ge 0$, $\sum_i p_i = 1$); m is the n means $m_1,\dots,m_n$; s is the n standard deviations $s_1,\dots,s_n$ ($s_i > 0$).

The PDF function for the normal mixture distribution returns the probability density function of a mixture of normal distributions, evaluated at the value x:

$$f(x) = \sum_{i=1}^{n} \frac{p_i}{s_i\sqrt{2\pi}}\, e^{-(x-m_i)^{2}/(2 s_i^{2})}$$

Note: There are no location or scale parameters for the normal mixture distribution.

## Syntax

PDF('PARETO',x,a<,k>)

where x is a numeric random variable; a is a numeric shape parameter (a > 0); k is a numeric scale parameter (default 1, k > 0).

The PDF function for the Pareto distribution returns the probability density function of a Pareto distribution with shape a and scale k, evaluated at the value x:

$$f(x) = \frac{a\,k^{a}}{x^{a+1}}, \qquad x \ge k$$

## Syntax

PDF('POISSON',n,m)

where n is an integer random variable (n = 0, 1, ...) and m is a numeric mean parameter (m > 0).

The PDF function for the Poisson distribution returns the probability density function of a Poisson distribution with mean m, evaluated at the value n:

$$f(n) = \frac{e^{-m}\, m^{n}}{n!}$$

Note: There are no location or scale parameters for the Poisson distribution.

## Syntax

PDF('T',t,df<,nc>)

where t is a numeric random variable; df is a numeric degrees-of-freedom parameter (df > 0); nc is an optional numeric non-centrality parameter.

The PDF function for the T distribution returns the probability density function of a T distribution with degrees of freedom df and non-centrality parameter nc, evaluated at the value t. Non-integer degrees of freedom are accepted. If nc is omitted or equal to zero, the value returned is from the central T distribution. Let $\nu$ = df; the central density is

$$f(t) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\;\Gamma\!\left(\frac{\nu}{2}\right)}\left(1+\frac{t^{2}}{\nu}\right)^{-(\nu+1)/2}$$

For $\delta$ = nc ≠ 0, the value returned is the density of the non-central t distribution with non-centrality $\delta$.

Note: There are no location or scale parameters for the T distribution.

## Syntax

PDF('UNIFORM',x<,l,r>)

where x is a numeric random variable; l is the numeric left location parameter (default 0); r is the numeric right location parameter (default 1). Range: r > l.

The PDF function for the uniform distribution returns the probability density function of a uniform distribution with left location l and right location r, evaluated at the value x:

$$f(x) = \frac{1}{r-l}, \qquad l \le x \le r$$

## Syntax

PDF('WALD',x,d)
PDF('IGAUSS',x,d)

where x is a numeric random variable and d is a numeric shape parameter (d > 0).

The PDF function for the Wald distribution returns the probability density function of a Wald (inverse Gaussian) distribution with shape parameter d, evaluated at the value x:

$$f(x) = \sqrt{\frac{d}{2\pi x^{3}}}\; e^{-d(x-1)^{2}/(2x)}, \qquad x > 0$$

Note: There are no location or scale parameters for the Wald distribution.

## Syntax

PDF('WEIBULL',x,a<,λ>)

where x is a numeric random variable; a is a numeric shape parameter (a > 0); λ is a numeric scale parameter (default 1, λ > 0).

The PDF function for the Weibull distribution returns the probability density function of a Weibull distribution with shape a and scale λ, evaluated at the value x:

$$f(x) = \frac{a}{\lambda}\left(\frac{x}{\lambda}\right)^{a-1} e^{-(x/\lambda)^{a}}, \qquad x \ge 0$$

## Examples

SAS Statement                                                  Result
y=pdf('BERN',0,.25);                                           0.75
y=pdf('BERN',1,.25);                                           0.25
y=pdf('BETA',0.2,3,4);                                         1.2288
y=pdf('BINOM',4,.5,10);                                        0.20508
y=pdf('CAUCHY',2);                                             0.063662
y=pdf('CHISQ',11.264,11);                                      0.081686
y=pdf('EXPO',1);                                               0.36788
y=pdf('F',3.32,2,3);                                           0.054027
y=pdf('GAMMA',1,3);                                            0.18394
y=pdf('HYPER',2,200,50,10);                                    0.28685
y=pdf('LAPLACE',1);                                            0.18394
y=pdf('LOGISTIC',1);                                           0.19661
y=pdf('LOGNORMAL',1);                                          0.39894
y=pdf('NEGB',1,.5,2);                                          0.25
y=pdf('NORMAL',1.96);                                          0.058441
y=pdf('NORMALMIX',2.3,3,.33,.33,.34,.5,1.5,2.5,.79,1.6,4.3);   0.1166
y=pdf('PARETO',1,1);                                           1
y=pdf('POISSON',2,1);                                          0.18394
y=pdf('T',.9,5);                                               0.24194
y=pdf('UNIFORM',0.25);                                         1
y=pdf('WALD',1,2);                                             0.56419
y=pdf('WEIBULL',1,2);                                          0.73576
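A few rows of the example table above can be cross-checked against the textbook formulas. The sketch below is Python with only the standard library (scipy.stats would give the same values); the helper names are illustrative, not part of SAS:

```python
import math

# Re-derive a handful of the SAS PDF example values from the standard
# density/mass formulas (location 0, scale 1 defaults where applicable).
def pdf_bernoulli(x, p):
    return p**x * (1 - p)**(1 - x)

def pdf_poisson(n, m):
    return math.exp(-m) * m**n / math.factorial(n)

def pdf_normal(x, theta=0.0, lam=1.0):
    z = (x - theta) / lam
    return math.exp(-z * z / 2) / (lam * math.sqrt(2 * math.pi))

def pdf_weibull(x, a, lam=1.0):
    return (a / lam) * (x / lam)**(a - 1) * math.exp(-((x / lam)**a))

# Compare with pdf('BERN',0,.25), pdf('POISSON',2,1),
# pdf('NORMAL',1.96), and pdf('WEIBULL',1,2) from the table:
checks = [(pdf_bernoulli(0, 0.25), 0.75),
          (pdf_poisson(2, 1), 0.18394),
          (pdf_normal(1.96), 0.058441),
          (pdf_weibull(1, 2), 0.73576)]
for got, want in checks:
    assert abs(got - want) < 5e-5, (got, want)
```

The agreement to the five significant figures printed by SAS confirms the parameterizations described above (e.g. that the gamma/Weibull λ is a scale, not a rate).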
2021-08-04 03:45:28
https://www.physicsforums.com/threads/numbers-for-which-it-coverges.389174/
# Numbers for which it converges

zeion:

## Homework Statement

Find the positive numbers r for which $$\sum \frac{r^k}{k^r}$$ converges.

## The Attempt at a Solution

What method do I use for this?
$$\frac {a_{k+1}}{a_k} = \frac {r^{k+1}}{(k+1)^r} \cdot \frac{k^r}{r^k} = r \cdot \left(\frac{1}{k}\right)^{r/k}$$

Mentor: The second and third expressions aren't equal...

zeion: Ok, so I have
$$\frac {a_{k+1}}{a_k} = \frac {r^{k+1}}{(k+1)^r} \cdot \frac{k^r}{r^k} = r \cdot \frac{k^r}{(k+1)^r} = r \cdot \left(\frac{k}{k+1}\right)^r$$ ?

zeion: Then
$$r \cdot \left(1 - \frac{1}{k+1}\right)^r \to r \cdot 1^r$$ ?

zeion: Then r needs to be < 1?

Mentor: What if r = -200? That's less than 1.

zeion: I thought if the ratio is < 1 then it converges?

Mentor: The ratio test is usually given in terms of absolute values. IOW,
$$\lim_n \frac{|a_{n + 1}|}{|a_n|}$$
For this problem it's given that r is a positive number, so since the ratio of those terms is less than 1, the series converges.

zeion: So how do I know the ratio is less than 1?

Mentor: I thought you already knew that. What I thought you did was to evaluate this limit:
$$\lim_{k \to \infty} \frac {a_{k+1}}{a_k} = \lim_{k \to \infty} r \cdot \left(\frac{k}{k+1}\right)^r$$
What do you get for this limit? And what does that value imply for the test you are using?

zeion: I get $$r \cdot 1^r = r$$? And r is positive.

Mentor: From your work, what are the values of r that cause this series to converge?

zeion: I need the ratio to approach something < 1? So since r is positive I need it to be 0 < r < 1?

Mentor: C'mon, show a little confidence in your ability. Tell me, don't ask me.
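A quick numeric sketch (function names are illustrative) confirms both the limit of the ratio and the rapid convergence for 0 < r < 1; note that at r = 1 the series is the harmonic series, which diverges:

```python
# Ratio test for sum_{k>=1} r^k / k^r with r > 0.
def ratio(r, k):
    """a_{k+1} / a_k = r * (k / (k+1))**r, which tends to r as k grows."""
    return r * (k / (k + 1))**r

def partial_sum(r, n):
    return sum(r**k / k**r for k in range(1, n + 1))

# The ratio is essentially r for large k:
assert abs(ratio(0.5, 10**6) - 0.5) < 1e-6

# For r = 0.5 the tail is negligible: doubling the number of terms
# changes the partial sum by less than 1e-12.
assert abs(partial_sum(0.5, 400) - partial_sum(0.5, 200)) < 1e-12
```

The ratio test is inconclusive when the limit equals 1, which is why the boundary case r = 1 has to be checked separately (there the terms are 1/k).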
2022-08-11 20:54:37
https://mathematica.stackexchange.com/questions/148517/finding-the-electric-field-surrounding-a-torus?answertab=oldest
# Finding the electric field surrounding a torus [closed]

I'm trying to use NDSolve to get a notion of the electric field around a uniform torus of density 1 (for simplicity) using Maxwell's equations. I'm running into some trouble.

```mathematica
R = 3; r = 1;
densityDist[x_, y_, z_] := If[(R - Sqrt[x^2 + y^2])^2 + z^2 < r^2, 1, 0];
Ef[x_, y_, z_] = Module[{Ef},
  Ef[x_, y_, z_] := {Ex[x, y, z], Ey[x, y, z], Ez[x, y, z]};
  Ef[x, y, z] /.
   NDSolve[{Div[Ef[x, y, z], {x, y, z}] == densityDist[x, y, z],
     Ef[0, 0, 0] == {0, 0, 0}},
    Ef[x, y, z], {x, -6, 6}, {y, -6, 6}, {z, -6, 6}]]
```

But I get

NDSolve::underdet: There are more dependent variables, {Ex[x, y, z], Ey[x, y, z], Ez[x, y, z]}, than equations, so the system is underdetermined.

Any clues of what other conditions I can put in?

(Closed as off-topic by m_goldberg, MarcoB, garej, mikado, LLlAMnYP, Jun 19 '17 at 9:44.)

Comment: What happened to Maxwell's other equations? You're missing $\nabla \times E = 0$, assuming there is no $B$ field.
```mathematica
R = 3; r = 1;
densityDist[x_, y_, z_] := If[(R - Sqrt[x^2 + y^2])^2 + z^2 < r^2, 1, 0];
Ef[x_, y_, z_] = {Ex[x, y, z], Ey[x, y, z], Ez[x, y, z]};
```

Then we can solve the equations as follows, with some particular boundary conditions:

```mathematica
maxwell = {Div[Ef[x, y, z], {x, y, z}] == densityDist[x, y, z]}~Join~
   (# == 0 & /@ Curl[Ef[x, y, z], {x, y, z}])[[;; 2]];
NDSolve[maxwell~Join~
  {Ex[-10, y, z] == 0, Ey[-10, y, z] == 0, Ez[-10, y, z] == 0},
 {Ex, Ey, Ez}, {x, -10, 10}, {y, -10, 10}, {z, -10, 10}]
```

For this problem, however, it would be better to take advantage of the symmetries and use spherical coordinates; then you can impose vanishing boundary conditions at a large value of $r$.

It is easy to see what is happening if you look at what Div[{Ex[x, y, z], Ey[x, y, z], Ez[x, y, z]}, {x, y, z}] == densityDist[x, y, z] evaluates to.

```mathematica
R = 3; r = 1;
densityDist[x_, y_, z_] = Boole[(R - Sqrt[x^2 + y^2])^2 + z^2 < r^2];
Div[{Ex[x, y, z], Ey[x, y, z], Ez[x, y, z]}, {x, y, z}] ==
 densityDist[x, y, z]
```

Derivative[0, 0, 1][Ez][x, y, z] + Derivative[0, 1, 0][Ey][x, y, z] + Derivative[1, 0, 0][Ex][x, y, z] == Boole[(3 - Sqrt[x^2 + y^2])^2 + z^2 < 1]

Since you have done nothing to tell Mathematica that the partial derivatives are vectors rather than scalars, Mathematica simply sees one equation in three unknowns. Hence, the error message. You need to add equations that make the vector nature of the derivatives clear.
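Independently of NDSolve, a brute-force cross-check is possible by discretizing the torus and summing Coulomb contributions directly. The sketch below is Python rather than Mathematica; it uses a midpoint grid and sets 1/(4πε₀) = 1 to match the question's density-1 convention. The grid spacing and the far-field test point are arbitrary choices:

```python
import math

# Uniform torus: (R - sqrt(x^2 + y^2))^2 + z^2 < r^2, charge density 1.
R, r = 3.0, 1.0
h = 0.25  # grid spacing; the torus fits inside the box [-4, 4]^3

def e_field(px, py, pz):
    """Coulomb superposition over midpoint cells, with 1/(4 pi eps0) = 1."""
    ex = ey = ez = 0.0
    n = int(8 / h)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -4 + (i + 0.5) * h
                y = -4 + (j + 0.5) * h
                z = -4 + (k + 0.5) * h
                if (R - math.hypot(x, y))**2 + z**2 < r**2:
                    dx, dy, dz = px - x, py - y, pz - z
                    d3 = (dx * dx + dy * dy + dz * dz)**1.5
                    q = h**3  # cell charge = density (1) * cell volume
                    ex += q * dx / d3
                    ey += q * dy / d3
                    ez += q * dz / d3
    return ex, ey, ez

# Far from the torus, the field should approach that of a point charge
# Q = 2 pi^2 R r^2 (the torus volume) at the origin.
Q = 2 * math.pi**2 * R * r**2
ex, ey, ez = e_field(0.0, 0.0, 100.0)
print(ez, Q / 100.0**2)  # the two values should agree to a few percent
```

This is only an order-h-accurate check, but it is a useful sanity test for any PDE solution: on the symmetry axis the transverse components cancel and the axial component reproduces the monopole field at large distance.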
2019-11-14 23:48:36
https://mathoverflow.net/questions/385955/how-can-we-make-precise-the-notion-that-a-finite-dimensional-vector-space-is-not/385981
# How can we make precise the notion that a finite-dimensional vector space is not canonically isomorphic to its dual via category theory? There are quite a few questions both on this site and math.SE related to this topic as well as what we mean when we say "natural" or "canonical". For the purposes of this question, I'm going to consider canonical and natural to be synonyms, and use wikipedia's definition of an unnatural isomorphism: A particular map between particular objects may be called an unnatural isomorphism (or "this isomorphism is not natural") if the map cannot be extended to a natural transformation on the entire category. There's an excellent question here: https://math.stackexchange.com/q/622589/816960 which raises the issue that it seems to be an artefact of the definition of a natural transformation that there is no canonical isomorphism between $$V$$ and $$V^*$$, since the dual functor is contravariant. One of the answers suggests that the only way to resolve this is by showing that every dinatural transformation from the identity functor to the dual functor must be zero. Personally I feel this doesn't get around the issue, because it seems to require us to redefine a canonical isomorphism between objects in a category as one which can be extended to a nontrivial dinatural transformation, and abandon the definition given above. Otherwise we're still left with a "definition not applicable"-based proof. Another candidate is given here: https://mathoverflow.net/a/345148/175537 in which we change the definition of the dual functor. One way to get around this is by working instead with the core groupoid $$\mathbf{Vect}_{core}$$, consisting of vector spaces and invertible linear transformations, and defining $$*:\mathbf{Vect}_{core} \to \mathbf{Vect}_{core}$$ to be the functor taking $$f:V \to W$$ to $$(f^{-1})^{\ast}: V^\ast \to W^\ast$$, the linear adjoint of its inverse. 
Then one can ask whether the identity is naturally isomorphic to the covariant dual functor $$\ast$$. It is not. But it seems like in both cases, we need a different definition of something to prove the nonexistence of a canonical isomorphism. My questions are: Can we prove that any isomorphism between a finite-dimensional vector space and its dual is unnatural using the definition above, without appealing to the fact that the definition of a natural transformation doesn't allow comparisons of covariant functors with contravariant ones? Since I suspect the answer to the above is "no", does this mean the definition of "unnatural isomorphism" isn't quite right? What is the right definition of "canonical isomorphism" to use in order to do this properly with category-theoretic machinery? • Okay, I misunderstood your question at first. But now I'm not sure I get it. You say "cannot be extended to a natural transformation". But a natural transformation is something between functors. So are you really asking if the map cannot be extended into a triple (functor, functor, natural transformation)? Do you have any requirement on the functors? For example, between what categories...? Mar 9, 2021 at 14:12 • @TimCampion It does come up (e.g. in the MSE question), and the result is that any dinatural transformation between the identity and the dual functors is zero. Mar 9, 2021 at 14:14 • What do you call the dual functor? The contravariant one? Then there can't be a natural isomorphism by definition, or a dinatural isomorphism by the MSE answer you linked. The inverse of the dual, restricted to the core groupoid? Then it's also done in the MSE answer you've linked. Mar 9, 2021 at 15:03 • I don't see any future in attempting to define an "unnatural isomorphism". Consider the positive question instead: any isomorphism (indeed homomorphism) $V\to V^*$ is a structure, otherwise written $\mu:V\otimes V\to K$. Then $(V,\mu)$ and $(V,\mu')$ are different mathematical objects.
Mar 9, 2021 at 15:55
• You could consider the category "Pair" of pairs $(V, \mu)$ where $V$ is a vector space and $\mu: V \otimes V \to K$ is a non-degenerate pairing (i.e. an isomorphism $V \cong V^*$). A morphism is a map of vector spaces compatible with pairings. There is a forgetful functor from Pair to Vect which forgets $\mu$. You can ask: does the forgetful functor admit a section? This would be a functorial choice of isomorphism $V \cong V^*$ for each $V$. The answer is no, there is no such section. Mar 9, 2021 at 18:00

Some people (including me) think that "canonical" should be synonymous with "natural on isomorphisms" or "functorial on isomorphisms" (depending on whether you are talking of a "canonical object" or a "canonical arrow"). Doing so solves the problem of variance in the definition. To be clear, by "functorial on isomorphisms" I just mean that we have a functor $$F:Core(C) \to D$$ where $$Core(C)$$ is the subcategory of $$C$$ containing all objects but only isomorphisms as arrows. And by "natural on isomorphisms", I'm asking for things that are natural transformations between such functors, i.e. that satisfy the naturality condition only with respect to isomorphisms.

If you look at the category of finite-dimensional vector spaces and linear isomorphisms between them, here $$V \mapsto V^*$$ can be made into an actual (covariant) endofunctor of this category, as you can fix the contravariance by inverting the morphisms, so that an invertible arrow $$f:V \to W$$ induces $$(f^*)^{-1} : V^* \to W^*$$. And there you can concretely show that there is no natural isomorphism between this functor and the identity. Indeed, choosing an isomorphism $$V \simeq V^*$$ gives you a non-degenerate bilinear form on $$V$$, and you can always find an automorphism of $$V$$ that does not preserve this bilinear form, which is exactly what the naturality on isomorphisms would mean!
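To make that last step fully explicit (a sketch in my own notation, not part of the original answer): a natural isomorphism $$\eta$$ from the identity to this covariant dual functor would make the following square commute for every automorphism $$f$$ of $$V$$:

```latex
\[
\begin{array}{ccc}
V & \xrightarrow{\ \eta_V\ } & V^{*}\\[2pt]
{\scriptstyle f}\big\downarrow & & \big\downarrow{\scriptstyle (f^{*})^{-1}}\\[2pt]
V & \xrightarrow{\ \eta_V\ } & V^{*}
\end{array}
\qquad\Longleftrightarrow\qquad
f^{*}\circ \eta_V\circ f=\eta_V
\qquad\Longleftrightarrow\qquad
B(fv,fw)=B(v,w)\ \ \forall\, v,w,
\]
% where B(v,w) := (\eta_V v)(w) is the bilinear form induced by \eta_V.
```

Over $$\mathbb{R}$$ or $$\mathbb{Q}$$, for instance, taking $$f=\lambda\,\mathrm{id}$$ with $$\lambda^{2}\neq 1$$ already violates the right-hand condition, since $$B(\lambda v,\lambda w)=\lambda^{2}B(v,w)$$; so naturality on isomorphisms fails for every candidate $$\eta$$ as soon as $$V \neq 0$$ and the field contains such a $$\lambda$$.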
So at the end of the day, one recovers exactly the argument that Paul Taylor or Chris Schommer-Pries made in the comments, but starting with a concrete categorical definition of "canonical".

• I think my answer is canonically isomorphic to yours. :) Mar 9, 2021 at 19:35
• I think they are actually quite different. You are looking at automorphisms of k-Vect; I'm not. (But I also like your answer; in fact I'd love to see a situation where the two lead to different conclusions) Mar 9, 2021 at 19:36
• I think an automorphism of k-Vect (which preserves the dual space functor) can be constructed from any automorphism $\alpha$ of a single vector space $V$. Precompose anything from $V$ with $\alpha^{-1}$; postcompose anything to $V$ with $\alpha$; similarly with $V^*$ and $\alpha^*$, $(\alpha^*)^{-1}$. Mar 9, 2021 at 19:39
• Here is an example where I think the two differ: Let $P$ be the category of posets with all suprema and infima and all order-preserving maps between them. Now consider the sentence "each object $X$ of P admits a canonical morphism $1 \to X$". It is true in my definition of canonical, as I can send the unique element of $1$ to the top element, and this is functorial on isomorphisms; but it is not true for your definition, as $P$ admits an automorphism that "reverses the order relation" on each object, which preserves $1$ but does not preserve my "canonical" choice. Mar 9, 2021 at 19:44
• OK, sorry, I see your point. You set things up so that we do not have the data of which relation is $\ge$ and which is $\le$ for each poset. Mar 9, 2021 at 20:00

$$\newcommand\kVect{k\text{-Vect}}$$When people say 'canonical' what they mean in this context is something like 'definable without parameters' (i.e., without choosing bases; actually, without choosing anything at all). See for instance the entry for "Definable Set" in Wikipedia. The important point is that canonical objects are invariant under automorphisms that preserve the relevant structure.
This means our isomorphism $$V\to V^*$$ should be invariant under all automorphisms of $$\kVect$$ that preserve the dualizing functor, considered as a contravariant functor from $$\kVect$$ to $$\kVect$$. This is not possible to obtain even for one finite-dimensional $$V$$. The reason is that with respect to some bases $$B$$, $$C$$ on $$V$$, $$V^*$$, a given isomorphism has the form of an identity matrix. But for any matrix $$A$$ representing an isomorphism $$V\to V$$, there is an automorphism of $$\kVect$$ which preserves the dual space functor and which transforms the representation of the given automorphism (with respect to the same bases $$B$$, $$C$$) to $$A^T A^{-1}$$. We can find a matrix $$A$$ for which this expression is different from $$I$$ (unless $$\dim(V)=1$$ or $$\lvert k\rvert=2=\dim(V)$$, as I learned here). Thus the isomorphism $$V \to V^*$$ is not preserved by this automorphism.

• Of course "unless $\dim(V) = 1$ or $\lvert k\rvert = 2 = \dim(V)$" should be "unless $\dim(V) \le 1$ …". (You could also be cutesy and re-phrase as "$\dim(V) \le 1$ or $\lvert V\rvert = 4$".) Mar 9, 2021 at 22:14
http://fxdiebold.blogspot.ca/
## Monday, September 26, 2016

### Fascinating Conference at Chicago

I just returned from the University of Chicago conference, "Machine Learning: What's in it for Economics?" Lots of cool things percolating. I'm teaching a Penn Ph.D. course later this fall on aspects of the ML/econometrics interface. Feeling really charged.

By the way, I hadn't yet been to the new Chicago economics "cathedral" (Saieh Hall for Economics) and Becker-Friedman Institute. Wow. What an institution, both intellectually and physically.

## Tuesday, September 20, 2016

### On "Shorter Papers"

Journals should not corral shorter papers into sections like "Shorter Papers". Doing so sends a subtle (actually unsubtle) message that shorter papers are basically second-class citizens, somehow less good, or less important, or less something -- not just less long -- than longer papers. If a paper is above the bar, then it's above the bar, and regardless of its length it should then be published simply as a paper, not a "shorter paper", or a "note", or anything else. There are myriad examples of "shorter papers" that are much more important than the vast majority of "longer papers".

## Monday, September 12, 2016

### Time-Series Econometrics and Climate Change

It's exciting to see time series econometrics contributing to the climate change discussion. Check out the upcoming CREATES conference, "Econometric Models of Climate Change", here. Here are a few good examples of recent time-series climate research, in chronological order. (There are many more. Look through the reference lists, for example, in the 2016 and 2017 papers below.)

Jim Stock et al. (2009) in Climatic Change.
Pierre Perron et al. (2013) in Nature.
Peter Phillips et al. (2016) in Nature.
Proietti and Hillebrand (2017), forthcoming in Journal of the Royal Statistical Society.

## Tuesday, September 6, 2016

### Inane Journal "Impact Factors"

Why are journals so obsessed with "impact factors"?
(The five-year impact factor is average citations/article in a five-year window.) They're often calculated to three decimal places, and publishers trumpet victory when they go from (say) 1.225 to 1.311! It's hard to think of a dumber statistic, or dumber over-interpretation. Are the numbers after the decimal point anything more than noise, and for that matter, are the numbers before the decimal much more than noise?

Why don't journals instead use the same citation indexes used for individuals? The leading index seems to be the h-index, which is the largest integer h such that an individual has h papers, each cited at least h times. I don't know who cooked up the h-index, and surely it has issues too, but the gurus love it, and in my experience it tells the truth.

Even better, why not stop obsessing over clearly-insufficient statistics of any kind? I propose instead looking at what I'll call a "citation signature plot" (CSP), simply plotting the number of cites for the most-cited paper, the number of cites for the second-most-cited paper, and so on. (Use whatever window(s) you want.) The CSP reveals everything, instantly and visually. How high is the CSP for the top papers? How quickly, and with what pattern, does it approach zero? etc., etc. It's all there. Google-Scholar CSP's are easy to make for individuals, and they're tremendously informative. They'd be only slightly harder to make for journals. I'd love to see some.

## Monday, August 29, 2016

### On Credible Cointegration Analyses

I may not know whether some $$I(1)$$ variables are cointegrated, but if they are, I often have a very strong view about the likely number and nature of cointegrating combinations. Single-factor structure is common in many areas of economics and finance, so if cointegration is present in an $$N$$-variable system, for example, a natural benchmark is 1 common trend ($$N-1$$ cointegrating combinations).
And moreover, the natural cointegrating combinations are almost always spreads or ratios (which of course are spreads in logs). For example, log consumption and log income may or may not be cointegrated, but if they are, then the obvious benchmark cointegrating combination is $$(\ln C - \ln Y)$$. Similarly, the obvious benchmark for $$N$$ government bond yields $$y$$ is $$N-1$$ cointegrating combinations, given by term spreads relative to some reference yield; e.g., $$y_2 - y_1$$, $$y_3 - y_1$$, ..., $$y_N - y_1$$. There's not much literature exploring this perspective. (One notable exception is Horvath and Watson, "Testing for Cointegration When Some of the Cointegrating Vectors are Prespecified", Econometric Theory, 11, 952-984.) We need more.

## Sunday, August 21, 2016

### More on Big Data and Mixed Frequencies

I recently blogged on Big Data and mixed-frequency data, arguing that Big Data (wide data, in particular) leads naturally to mixed-frequency data. (See here for the tall data / wide data / dense data taxonomy.) The obvious just occurred to me, namely that it's also true in the other direction. That is, mixed-frequency situations also lead naturally to Big Data, and with a subtle twist: the nature of the Big Data may be dense rather than wide. The theoretically-pure way to set things up is as a state-space system laid out at the highest observed frequency, appropriately treating most of the lower-frequency data as missing, as in ADS. By construction, the system is dense if any of the series are dense, as the system is laid out at the highest frequency.

## Wednesday, August 17, 2016

### On the Evils of Hodrick-Prescott Detrending

[If you're reading this in email, remember to click through on the title to get the math to render.]

Jim Hamilton has a very cool new paper, "Why You Should Never Use the Hodrick-Prescott (HP) Filter". Of course we've known of the pitfalls of HP ever since Cogley and Nason (1995) brought them into razor-sharp focus decades ago.
The title of the even-earlier Nelson and Kang (1981) classic, "Spurious Periodicity in Inappropriately Detrended Time Series", says it all. Nelson-Kang made the spurious-periodicity case against polynomial detrending of I(1) series. Hamilton makes the spurious-periodicity case against HP detrending of many types of series, including I(1). (Or, more precisely, Hamilton adds even more weight to the Cogley-Nason spurious-periodicity case against HP.)

But the main contribution of Hamilton's paper is constructive, not destructive. It provides a superior detrending method, based only on a simple linear projection. Here's a way to understand what "Hamilton detrending" does and why it works, based on a nice connection to Beveridge-Nelson (1981) detrending not noticed in Hamilton's paper.

First consider Beveridge-Nelson (BN) trend for I(1) series. BN trend is just a very long-run forecast based on an infinite past. [You want a very long-run forecast in the BN environment because the stationary cycle washes out from a very long-run forecast, leaving just the forecast of the underlying random-walk stochastic trend, which is also the current value of the trend since it's a random walk. So the BN trend at any time is just a very long-run forecast made at that time.] Hence BN trend is implicitly based on the projection: $$y_t ~ \rightarrow ~ c, ~ y_{t-h}, ~...,~ y_{t-h-p}$$, for $$h \rightarrow \infty$$ and $$p \rightarrow \infty$$.

Now consider Hamilton trend. It is explicitly based on the projection: $$y_t ~ \rightarrow ~ c, ~ y_{t-h}, ~...,~ y_{t-h-p}$$, for $$p = 3$$. (Hamilton also uses a benchmark of $$h = 8$$.)

So BN and Hamilton are both "linear projection trends", differing only in choice of $$h$$ and $$p$$! BN takes an infinite forecast horizon and projects on an infinite past. Hamilton takes a medium forecast horizon and projects on just the recent past.
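Since the trend is just a fitted linear projection, the cycle estimate is just an OLS residual. Here is a rough sketch of that regression of $$y_t$$ on a constant and $$y_{t-h},\dots,y_{t-h-p}$$ with the benchmark $$h=8$$, $$p=3$$ (illustrative code of my own, not from Hamilton's paper; helper names are made up):

```typescript
// Sketch of "Hamilton detrending": regress y_t on a constant and
// y_{t-h}, ..., y_{t-h-p}; the OLS residual is the estimated cycle.

// Solve the square system M x = b by Gauss-Jordan elimination with partial pivoting.
function solve(M0: number[][], b: number[]): number[] {
  const n = b.length;
  const M = M0.map((row, i) => [...row, b[i]]);
  for (let c = 0; c < n; c++) {
    let piv = c;
    for (let r = c + 1; r < n; r++) {
      if (Math.abs(M[r][c]) > Math.abs(M[piv][c])) piv = r;
    }
    [M[c], M[piv]] = [M[piv], M[c]];
    for (let r = 0; r < n; r++) {
      if (r === c) continue;
      const f = M[r][c] / M[c][c];
      for (let k = c; k <= n; k++) M[r][k] -= f * M[c][k];
    }
  }
  // After elimination, row i holds the pivot at column i and the RHS at column n.
  return M.map((row, i) => row[n] / row[i]);
}

// Return the estimated cycle (OLS residual) for t = h+p, ..., T-1.
function hamiltonCycle(y: number[], h = 8, p = 3): number[] {
  const k = p + 2; // constant plus p+1 lags
  const X: number[][] = [];
  const Y: number[] = [];
  for (let t = h + p; t < y.length; t++) {
    const row = [1];
    for (let j = 0; j <= p; j++) row.push(y[t - h - j]);
    X.push(row);
    Y.push(y[t]);
  }
  // Normal equations: (X'X) beta = X'y.
  const XtX = Array.from({ length: k }, () => new Array(k).fill(0) as number[]);
  const Xty = new Array(k).fill(0) as number[];
  for (let i = 0; i < X.length; i++) {
    for (let a = 0; a < k; a++) {
      Xty[a] += X[i][a] * Y[i];
      for (let c = 0; c < k; c++) XtX[a][c] += X[i][a] * X[i][c];
    }
  }
  const beta = solve(XtX, Xty);
  // Residual: observed minus fitted linear-projection trend.
  return X.map((row, i) => Y[i] - row.reduce((s, v, a) => s + v * beta[a], 0));
}
```

Pushing $$h$$ and $$p$$ toward very large values in the same regression moves it in the direction of the BN long-run projection, which is the connection drawn above.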
Much of Hamilton's paper is devoted to defending the choice of $$p = 3$$, which turns out to perform well for a wide range of data-generating processes (not just I(1)). The BN choice of $$h = p = \infty$$, in contrast, although optimal for I(1) series, is less robust to other DGP's. (And of course estimation of the BN projection as written above is infeasible, which people avoid in practice by assuming low-ordered ARIMA structure.)

## Monday, August 15, 2016

### More on Nonlinear Forecasting Over the Cycle

Related to my last post, here's a new paper that just arrived from Rachidi Kotchoni and Dalibor Stevanovic, "Forecasting U.S. Recessions and Economic Activity". It's not non-parametric, but it is non-linear. As Dalibor put it, "The method is very simple: predict turning points and recession probabilities in the first step, and then augment a direct AR model with the forecasted probability." Kotchoni-Stevanovic and Guerron-Quintana-Zhong are usefully read together.

## Sunday, August 14, 2016

### Nearest-Neighbor Forecasting in Times of Crisis

Nonparametric K-nearest-neighbor forecasting remains natural and obvious and potentially very useful, as it has been since its inception long ago. [Most crudely: Find the K-history closest to the present K-history, see what followed it, and use that as a forecast. Slightly less crudely: Find the N K-histories closest to the present K-history, see what followed each of them, and take an average. There are many obvious additional refinements.] Overall, nearest-neighbor forecasting remains curiously under-utilized in dynamic econometrics. Maybe that will change. In an interesting recent development, for example, new Federal Reserve System research by Pablo Guerron-Quintana and Molin Zhong puts nearest-neighbor methods to good use for forecasting in times of crisis.

## Monday, August 8, 2016

### NSF Grants vs. Improved Data

Lots of people are talking about the Cowen-Tabarrok Journal of Economic Perspectives piece, "A Skeptical View of the National Science Foundation's Role in Economic Research". See, for example, John Cochrane's insightful "A Look in the Mirror". A look in the mirror indeed. I was a 25-year ward of the NSF, but for the past several years I've been on the run. I bolted in part because the economics NSF reward-to-effort ratio has fallen dramatically for senior researchers, and in part because, conditional on the ongoing existence of NSF grants, I feel strongly that NSF money and "signaling" are better allocated to young assistant and associate professors, for whom the signaling value from NSF support is much higher.

Cowen-Tabarrok make some very good points. But I can see both sides of many of their issues and sub-issues, so I'm not taking sides. Instead let me make just one observation (and I'm hardly the first). If NSF funds were to be re-allocated, improved data collection and dissemination looks attractive. I'm not talking about funding cute RCTs-of-the-month. Rather, I'm talking about funding increased and ongoing commitment to improving our fundamental price and quantity data (i.e., the national accounts and related statistics). They desperately need to be brought into the new millennium. Just look, for example, at the wealth of issues raised in recent decades.

Ironically, it's hard to make a formal case (at least for data dissemination as opposed to creation), as Chris Sims has emphasized with typical brilliance. His "The Futility of Cost-Benefit Analysis for Data Dissemination" explains "why the apparently reasonable idea of applying cost-benefit analysis to government programs founders when applied to data dissemination programs." So who knows how I came to feel that NSF funds might usefully be re-allocated to data collection and dissemination. But so be it.
https://tex.stackexchange.com/questions/241692/how-to-set-up-authors-in-lncs-template
# How to set up authors in LNCS template?

I would like to set up the authors and emails in an LNCS paper, but I can't find the correct way to do this. I tried this:

\author{FirstAuthor LastName, Second Author LastName}

The problem is that I tried different ways to set the emails and the institute, but the document crashes. Any idea how to correctly set these parameters in order to get something like this:

(*) FirstAuthor LastName, Second Author LastName email@email.com email_2@email.com Institution

Update I tried this:

\author{FirstAuthor LastName \inst{1} \and SecondAuthor LastName \inst{2}}
%If there are too many authors, use \authorrunning
%\authorrunning{First Author et al.}
\institute{
Institute 1\\
\email{author1@email.com}\and
Institute 2\\
\email{author_2@email.com}
}

But I don't get the (*) style.

• May be \author{FirstAuthor LastName \and Second Author LastName} ? – Fran May 2 '15 at 3:43
• Thanks for the help @Fran. What about the institution and the emails? I don't get how to set them up. – skwoi May 2 '15 at 3:58
• \institute{ ... \email{ ....} \and ... \and ...}. Please read llncs.doc that is included in llncs2e.zip. – Fran May 2 '15 at 18:35
• Thanks for the feedback @Fran. Could you provide an example? I already read the doc but still don't get how to do this. – skwoi May 2 '15 at 19:34

According to the authors' instructions for Lecture Notes in Computer Science, you must use \email{<email address>} within \institute{}, and therefore you should not obtain the style that you are looking for (*). Compile llncs.doc (in spite of the extension, it is really a LaTeX file) with pdflatex to see the instructions for authors.
This is a MWE extracted from llncs.dem (also a LaTeX file):

\documentclass{llncs}
\begin{document}
\title{Hamiltonian Mechanics unter besonderer Ber\"ucksichtigung der h\"oheren Lehranstalten}
\author{%
Ivar Ekeland\inst{1} \and Roger Temam\inst{2} \and
Jeffrey Dean \and David Grove \and Craig Chambers \and Kim~B.~Bruce \and Elsa Bertino
}%
\institute{
Princeton University, Princeton NJ 08544, USA,\\
\email{I.Ekeland@princeton.edu},\\
\texttt{http://users/\homedir iekeland/web/welcome.html}
\and
Universit\'{e} de Paris-Sud, Laboratoire d'Analyse Num\'{e}rique, B\^{a}timent 425,\\
F-91405 Orsay Cedex, France}
\maketitle
\end{document}

(*) If you want that style for personal use, then it is better to start with the standard class article:

\documentclass{article}
\date{}
\def\email#1{\texttt{#1}}
\begin{document}
\title{Hamiltonian ...}
\author{%
Ivar Ekeland\\
\email{I.Ekeland@princeton.edu}
\and
Elsa Bertino\\
\email{E.Bertino@princeton.edu}
}%
\maketitle
\end{document}
https://www.shaalaa.com/question-bank-solutions/what-happens-when-sodium-metal-dropped-water-some-important-compounds-sodium_11155
# What Happens When Sodium Metal is Dropped in Water? - Chemistry

What happens when sodium metal is dropped in water?

#### Solution 1

When Na metal is dropped in water, it reacts violently to form sodium hydroxide and hydrogen gas. The chemical equation involved in the reaction is:

2Na(s) + 2H_2O(l) -> 2NaOH(aq) + H_2(g)

#### Solution 2

2Na + 2H_2O -> 2NaOH + H_2

Is there an error in this question or solution?

#### APPEARS IN

NCERT Class 11 Chemistry Textbook
Chapter 10 The s-Block Elements
Q 25.1 | Page 306
http://tex.stackexchange.com/questions/7091/using-listing-for-displaying-matlab-code
# Using listings for displaying Matlab code

I'm trying to display Matlab code in a LaTeX document, and the comments are typeset with an ugly space between letters:

\documentclass{article}
\usepackage{listings}
\lstset{language=Matlab}
\lstset{tabsize=2}
\begin{document}
\begin{lstlisting}
function gramschmidt(A)
%The columns of A are the initial basis to the subspace
\end{lstlisting}
\end{document}

Is there a way to improve the kerning (or whatever it's called) of the text in the comments?

\lstset{flexiblecolumns=true}
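Applying the answer's fix to the question's example gives the following (a sketch; with flexible columns enabled, listings drops the fixed column grid that causes the letter-spacing in comments):

```latex
\documentclass{article}
\usepackage{listings}
% flexiblecolumns=true relaxes the fixed-width column alignment
\lstset{language=Matlab, tabsize=2, flexiblecolumns=true}
\begin{document}
\begin{lstlisting}
function gramschmidt(A)
%The columns of A are the initial basis to the subspace
\end{lstlisting}
\end{document}
```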
https://math.stackexchange.com/questions/3366476/find-an-optimal-solution-formula-analytically-when-the-variables-are-within-natu
# Find an optimal solution formula analytically when the variables are within natural logarithm

If I have an optimization problem as follows:

$$(\mathbf{P}_1) \quad \max_{\boldsymbol{x}} \quad \ln\Bigg(1 + \sum_{i=1}^I x_i a_i\Bigg) - \sum_{i=1}^I x_i b_i$$

$$\text{s.t.} \quad \sum_{i=1}^I x_i a_i \leq N, \qquad 0 \leq x_i \leq 1, \ \forall i \in [1, I]$$

how can I find the optimal solution formula for $$x_i, \forall i \in [1,I]$$? As far as I understand, we need to obtain the Lagrangian expression first:

$$L(x,\lambda,\sigma) = \ln\Bigg(1 + \sum_{i=1}^I x_i a_i\Bigg) - \sum_{i=1}^I x_i b_i + \lambda_1\Big(N - \sum_{i=1}^I x_i a_i - \sigma_1\Big) + \lambda_2(x_1 - \sigma_2) + \lambda_3(1 - x_1 - \sigma_3) + ... + \lambda_{I+1}(x_I - \sigma_{I+1}) + \lambda_{I+2}(1 - x_I - \sigma_{I+2})$$

However, since it has many $$x$$'s, how can I obtain the general optimal $$x^*_i, \forall i \in [1,I]$$ formula? I also think that I can use the relaxation solution of the Knapsack problem. However, since the variables are inside the logarithm, is there any possible way to solve it analytically?

• Leaving aside that the objective function doesn't seem to be written out correctly, this is easy to formulate and numerically solve as a convex optimization problem in CVX, CVXPY, or similar tool. As for the objective function, there is a single $x_ib_i$ term by its lonesome in the objective function, which doesn't make sense because it depends on $i$. Is there supposed to be a sum over $i$ of $x_ib_i$? If so, that presents no complication to CVX or CVXPY. Sep 23 '19 at 14:56
• Sorry, I forgot to add the sum. I know that I can solve it using many tools, but is it possible to explain it analytically? @MarkL.Stone Sep 24 '19 at 0:17

You will need to use the Karush-Kuhn-Tucker conditions.
The linearity constraint qualification (LCQ) holds since all the constraints are linear. Expressing the problem vectorially is perhaps more helpful, which gives

\begin{align*}\max_{\boldsymbol x}\quad&\ln(1+\boldsymbol a^\top \boldsymbol x)-\boldsymbol b^\top \boldsymbol x\\ \text{s.t.}\quad&\boldsymbol a^\top \boldsymbol x-N\leq 0&[\lambda]\\ &\boldsymbol x-\boldsymbol e\leq \boldsymbol 0&[\boldsymbol\mu]\\ &-\boldsymbol x\leq \boldsymbol 0&[\boldsymbol\nu] \end{align*}

The KKT necessary conditions on an optimum $$\boldsymbol x^*$$ are

\begin{align*} \frac{1}{1+\boldsymbol a^\top \boldsymbol x^*}\boldsymbol a-\boldsymbol b=\lambda\boldsymbol a+\boldsymbol\mu-\boldsymbol \nu&&\text{Stationarity}\\ \boldsymbol a^\top \boldsymbol x^*-N\leq 0&&\text{Primal Feasibility}\\ \boldsymbol x^*-\boldsymbol e\leq \boldsymbol 0&&\text{Primal Feasibility}\\ -\boldsymbol x^*\leq \boldsymbol 0&&\text{Primal Feasibility}\\ \lambda\geq 0 &&\text{Dual Feasibility}\\ \boldsymbol\mu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\ \boldsymbol\nu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\ \lambda[\boldsymbol a^\top \boldsymbol x^*-N]=0&&\text{Complementarity}\\ \boldsymbol\mu^\top[\boldsymbol x^*-\boldsymbol e]=\boldsymbol 0&&\text{Complementarity}\\ \boldsymbol\nu^\top[-\boldsymbol x^*]=\boldsymbol 0&&\text{Complementarity} \end{align*}

The stationarity conditions give a linear system of equations $$A\boldsymbol x^*=\boldsymbol d$$ where $$\boldsymbol d=(1-\lambda)\boldsymbol a -\boldsymbol \mu+\boldsymbol \nu-\boldsymbol b$$ and $$A=(\boldsymbol a-\boldsymbol d)\boldsymbol a ^\top$$. So if there is a solution, it would be at $$\boldsymbol x^*=A^{-1}\boldsymbol d$$. Then we just need to determine the dual variables $$\lambda$$, $$\boldsymbol \mu$$ and $$\boldsymbol \nu$$. Unfortunately, this is where things start to depend too much on the precise values of the constants $$\boldsymbol a$$, $$\boldsymbol b$$, and $$N$$.
You will basically need to do case-checking on the complementarity conditions, for each individual dual variable $$\lambda$$, $$\mu_i$$, and $$\nu_i$$, to choose which are zero and which are non-zero. That would give $$2^{2I+1}$$ cases to check against the KKT sufficient condition, which is that $$\frac{\boldsymbol a\boldsymbol a^\top}{1+\boldsymbol a ^\top\boldsymbol x^*}$$ is positive semi-definite over the set of vectors $$\boldsymbol s$$ orthogonal to the active constraints, i.e. over the set $$\left\{\begin{bmatrix}s\\\hline\boldsymbol t\\\hline\boldsymbol u\end{bmatrix}: \begin{cases}s[\boldsymbol a^\top \boldsymbol x-N]=0&\text{if }\lambda>0\\ t_i[x_i-1]=0&\text{if }\mu_i>0,\quad\forall{i}\in[1,I] \\ u_i[-x_i]=0&\text{if }\nu_i>0,\quad\forall{i}\in[1,I]\end{cases}\right\}$$ Needless to say, if all these constants do not have fixed values, then obtaining a nice closed-form expression is elusive. The solvers automate this process, sometimes taking clever shortcuts. This kind of explicit work can be helpful for understanding the structure of the problem and the general "shape" of the solution, but not for the solution itself. The problem is in far too much of a general form for that (consider, by way of analogy, the knapsack problem, where the solution depends heavily on the costs and volumes involved).

• Thanks a lot for the detailed explanation. It helps me a lot. Sep 24 '19 at 6:28
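As a quick sanity check of the KKT machinery (my own computation, not part of the answer above): in the scalar case $$I=1$$ with all constraints inactive ($$\lambda=\mu=\nu=0$$), stationarity reduces to a one-line solution:

```latex
\[
\frac{a}{1+ax}-b=0
\quad\Longrightarrow\quad
1+ax=\frac{a}{b}
\quad\Longrightarrow\quad
x^{*}=\frac{1}{b}-\frac{1}{a},
\]
% valid only when the interior assumption holds, i.e.
% $0 \le \tfrac{1}{b}-\tfrac{1}{a} \le \min\!\left(1, \tfrac{N}{a}\right)$;
% otherwise a multiplier is active and the case-checking above kicks in.
```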
https://learnshareit.com/how-to-solve-property-getcontext-does-not-exist-on-type-htmlelement-in-typescript/
# How To Solve “Property ‘getContext’ does not exist on type ‘HTMLElement'” in TypeScript

We usually use the getContext() method to manipulate a canvas element in an HTML document. When calling this method in TypeScript, you may get the error “Property ‘getContext’ does not exist on type ‘HTMLElement'”. The cause is that the HTMLElement type does not declare a getContext() method. This article will show you how to fix the error using a type assertion.

## Cause of the error “Property ‘getContext’ does not exist on type ‘HTMLElement'”

The error occurs because we call the getContext() method, which is not present on the HTMLElement type. When you use the document.getElementById() method to get the canvas element from the DOM tree, TypeScript types the return value as a plain HTMLElement, so the compiler rejects the call. Error Example: // Use the document.getElementById() method to get the canvas element const myCanvas = document.getElementById("my-canvas"); // Use the getContext() method on the HTMLElement object const context = myCanvas.getContext("2d"); Error Output: Property 'getContext' does not exist on type 'HTMLElement'. In the above example, we get the my-canvas element from the DOM with the document.getElementById() method, referring to the element by its id. The problem is that, for TypeScript, the return type of this method is HTMLElement, and calling getContext() on a value of that type is an error: the method is declared on HTMLCanvasElement, not on HTMLElement.

## Solution for the error Property ‘getContext’ does not exist on type ‘HTMLElement’

To solve this error, we need to let TypeScript know that we are working with an HTMLCanvasElement and not a plain HTMLElement, by using a technique called type assertion: we assert the HTMLElement to HTMLCanvasElement.
Example: // Use a type assertion to convert to HTMLCanvasElement const myCanvas = document.getElementById( "my-canvas" ) as HTMLCanvasElement | null; // Now we can use the getContext() method const context = myCanvas?.getContext("2d"); console.log(context); Output: CanvasRenderingContext2D {canvas: canvas#my-canvas, globalAlpha: 1, globalCompositeOperation: 'source-over', filter: 'none', imageSmoothingEnabled: true, …} In the above example, we use the keyword "as" followed by a union of the HTMLCanvasElement and null types. We add null because the document may not contain a my-canvas element, in which case document.getElementById() returns null; the optional chaining operator (?.) then guards the getContext() call. Once the value is asserted to the HTMLCanvasElement type, we can use the getContext() method on it.

## Summary

So we have solved the error “Property ‘getContext’ does not exist on type ‘HTMLElement'”. The cause is calling the getContext() method on a value whose static type is HTMLElement, which does not declare that method. By using a type assertion, the error is solved easily.
https://math.libretexts.org/TextMaps/Differential_Equations/Book%3A_Differential_Equations_for_Engineers_(Lebl)/4%3A_Fourier_series_and_PDEs/4.06%3A_PDEs%2C_separation_of_variables%2C_and_the_heat_equation
# 4.6: PDEs, separation of variables, and the heat equation

Let us recall that a partial differential equation or PDE is an equation containing the partial derivatives with respect to several independent variables. Solving PDEs will be our main application of Fourier series. A PDE is said to be linear if the dependent variable and its derivatives appear at most to the first power and in no functions. We will only talk about linear PDEs. Together with a PDE, we usually have specified some boundary conditions, where the value of the solution or its derivatives is specified along the boundary of a region, and/or some initial conditions, where the value of the solution or its derivatives is specified for some initial time. Sometimes such conditions are mixed together and we will refer to them simply as side conditions.

We will study three specific partial differential equations, each one representing a more general class of equations. First, we will study the heat equation, which is an example of a parabolic PDE. Next, we will study the wave equation, which is an example of a hyperbolic PDE. Finally, we will study the Laplace equation, which is an example of an elliptic PDE. Each of our examples will illustrate behavior that is typical for the whole class.

#### 4.6.1 Heat on an insulated wire

Let us first study the heat equation. Suppose that we have a wire (or a thin metal rod) of length $$L$$ that is insulated except at the endpoints. Let $$x$$ denote the position along the wire and let $$t$$ denote time. See Figure 4.13. Figure 4.13: Insulated wire. Let $$u(x,t)$$ denote the temperature at point $$x$$ at time $$t$$.
The equation governing this setup is the so-called one-dimensional heat equation: $\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2},$ where $$k>0$$ is a constant (the thermal conductivity of the material). That is, the change in heat at a specific point is proportional to the second derivative of the heat along the wire. This makes sense; if at a fixed $$t$$ the graph of the heat distribution has a maximum (the graph is concave down), then heat flows away from the maximum. And vice-versa. We will generally use a more convenient notation for partial derivatives. We will write $$u_t$$ instead of $$\frac{\partial u}{\partial t}$$, and we will write $$u_{xx}$$ instead of $$\frac{\partial^2 u}{\partial x^2}$$. With this notation the heat equation becomes $u_t=ku_{xx}.$ For the heat equation, we must also have some boundary conditions. We assume that the ends of the wire are either exposed and touching some body of constant heat, or the ends are insulated. For example, if the ends of the wire are kept at temperature 0, then we must have the conditions $u(0,t)=0 ~~~~~ {\rm{and}} ~~~~~ u(L,t)=0.$ If, on the other hand, the ends are also insulated we get the conditions $u_x(0,t)=0 ~~~~~ {\rm{and}} ~~~~~ u_x(L,t)=0.$ In other words, heat is not flowing in nor out of the wire at the ends. We always have two conditions along the $$x$$ axis as there are two derivatives in the $$x$$ direction. These side conditions are called homogeneous (that is, $$u$$ or a derivative of $$u$$ is set to zero). Furthermore, suppose that we know the initial temperature distribution at time $$t=0$$. That is, $u(x,0)=f(x),$ for some known function $$f(x)$$. This initial condition is not a homogeneous side condition. #### 4.6.2 Separation of variables The heat equation is linear as $$u$$ and its derivatives do not appear to any powers or in any functions. Thus the principle of superposition still applies for the heat equation (without side conditions). 
If $$u_1$$ and $$u_2$$ are solutions and $$c_1,c_2$$ are constants, then $$u= c_1u_1+c_2u_2$$ is also a solution.

Exercise $$\PageIndex{1}$$: Verify the principle of superposition for the heat equation.

Superposition also preserves some of the side conditions. In particular, if $$u_1$$ and $$u_2$$ are solutions that satisfy $$u(0,t)=0$$ and $$u(L,t)=0$$, and $$c_1,c_2$$ are constants, then $$u= c_1u_1+c_2u_2$$ is still a solution that satisfies $$u(0,t)=0$$ and $$u(L,t)=0$$. Similarly for the side conditions $$u_x(0,t)=0$$ and $$u_x(L,t)=0$$. In general, superposition preserves all homogeneous side conditions.

The method of separation of variables is to try to find solutions that are sums or products of functions of one variable. For example, for the heat equation, we try to find solutions of the form $u(x,t)=X(x)T(t).$ That the desired solution we are looking for is of this form is too much to hope for. What is perfectly reasonable to ask, however, is to find enough “building-block” solutions of the form $$u(x,t)=X(x)T(t)$$ using this procedure so that the desired solution to the PDE is somehow constructed from these building blocks by the use of superposition.

Let us try to solve the heat equation $u_t=ku_{xx} ~~~~~ {\rm{with}} ~~~ u(0,t)=0, ~~~~~ u(L,t)=0, ~~~~~ {\rm{and}} ~~~ u(x,0)=f(x).$ Let us guess $$u(x,t)=X(x)T(t)$$. We plug into the heat equation to obtain $X(x)T'(t)=kX''(x)T(t).$ We rewrite as $\frac{T'(t)}{kT(t)}= \frac{X''(x)}{X(x)}.$ This equation must hold for all $$x$$ and all $$t$$. But the left hand side does not depend on $$x$$ and the right hand side does not depend on $$t$$. Hence, each side must be a constant. Let us call this constant $$- \lambda$$ (the minus sign is for convenience later). We obtain the two equations $\frac{T'(t)}{kT(t)}= - \lambda = \frac{X''(x)}{X(x)}.$ In other words $X''(x) + \lambda X(x)=0, \\ T'(t) + \lambda k T(t)=0.$ The boundary condition $$u(0,t)=0$$ implies $$X(0)T(t)=0$$.
We are looking for a nontrivial solution and so we can assume that $$T(t)$$ is not identically zero. Hence $$X(0)=0$$. Similarly, $$u(L,t)=0$$ implies $$X(L)=0$$. We are looking for nontrivial solutions $$X$$ of the eigenvalue problem $$X'' + \lambda X = 0, X(0)=0, X(L)=0$$. We have previously found that the only eigenvalues are $$\lambda_n = \frac{n^2 \pi^2}{L^2}$$, for integers $$n \geq 1$$, where eigenfunctions are $$\sin \left( \frac{n \pi}{L}x \right)$$. Hence, let us pick the solutions $X_n(x)= \sin \left( \frac{n \pi}{L}x \right).$ The corresponding $$T_n$$ must satisfy the equation $T'_n(t) + \frac{n^2 \pi^2}{L^2}kT_n(t)=0.$ By the method of integrating factor, the solution of this problem is $T_n(t)=e^{\frac{-n^2 \pi^2}{L^2}kt}.$ It will be useful to note that $$T_n(0)=1$$. Our building-block solutions are $u_n(x,t)=X_n(x)T_n(t)= \sin \left( \frac{n \pi}{L}x \right) e^{\frac{-n^2 \pi^2}{L^2}kt}.$ We note that $$u_n(x,0)= \sin \left( \frac{n \pi}{L}x \right)$$. Let us write $$f(x)$$ as the sine series $f(x)= \sum_{n=1}^{\infty} b_n \sin \left( \frac{n \pi}{L}x \right).$ That is, we find the Fourier series of the odd periodic extension of $$f(x)$$. We used the sine series as it corresponds to the eigenvalue problem for $$X(x)$$ above. Finally, we use superposition to write the solution as $u(x,t)= \sum^{\infty}_{n=1}b_n u_n (x,t)= \sum^{\infty}_{n=1}b_n \sin(\frac{n \pi}{L}x)e^{\frac{-n^2 \pi^2}{L^2}kt}.$ Why does this solution work? First note that it is a solution to the heat equation by superposition. It satisfies $$u(0,t)=0$$ and $$u(L,t)=0$$ , because $$x=0$$ or $$x=L$$ makes all the sines vanish. Finally, plugging in $$t=0$$, we notice that $$T_n(0)=1$$ and so $u(x,0)= \sum^{\infty}_{n=1}b_n u_n (x,0)= \sum^{\infty}_{n=1}b_n \sin(\frac{n \pi}{L}x)=f(x).$ Example $$\PageIndex{1}$$: Suppose that we have an insulated wire of length $$1$$, such that the ends of the wire are embedded in ice (temperature 0). Let $$k=0.003$$. 
Then suppose that the initial heat distribution is $$u(x,0)=50x(1-x)$$. See Figure 4.14. Figure 4.14: Initial distribution of temperature in the wire. We want to find the temperature function $$u(x,t)$$. Let us suppose we also want to find when (at what time) the maximum temperature in the wire drops to one half of the initial maximum of $$12.5$$.

We are solving the following PDE problem: $u_t=0.003u_{xx}, \\ u(0,t)= u(1,t)=0, \\ u(x,0)= 50x(1-x) ~~~~ {\rm{for~}} 0<x<1.$ We write $$f(x)=50x(1-x)$$ for $$0<x<1$$ as a sine series. That is, $$f(x)= \sum^{\infty}_{n=1}b_n \sin(n \pi x)$$, where $b_n= 2 \int^1_0 50x(1-x) \sin(n \pi x)dx = \frac{200}{\pi^3 n^3}-\frac{200(-1)^n}{\pi^3 n^3}= \left\{ \begin{array}{cc} 0 & {\rm{if~}} n {\rm{~even,}} \\ \frac{400}{\pi^3 n^3} & {\rm{if~}} n {\rm{~odd.}} \end{array} \right.$ Figure 4.15: Plot of the temperature of the wire at position $$x$$ at time $$t$$. The solution $$u(x,t)$$, plotted in Figure 4.15 for $$0 \leq t \leq 100$$, is given by the series: $u(x,t)= \sum^{\infty}_{\underset{n~ {\rm{odd}} }{n=1}} \frac{400}{\pi^3 n^3} \sin(n \pi x) e^{-n^2 \pi^2 0.003t}.$

Finally, let us answer the question about the maximum temperature. It is relatively easy to see that the maximum temperature will always be at $$x=0.5$$, in the middle of the wire. The plot of $$u(x,t)$$ confirms this intuition. If we plug in $$x=0.5$$ we get $u(0.5,t)= \sum^{\infty}_{\underset{n~ {\rm{odd}} }{n=1}} \frac{400}{\pi^3 n^3} \sin(n \pi 0.5) e^{-n^2 \pi^2 0.003t}.$ For $$n=3$$ and higher (remember $$n$$ is only odd), the terms of the series are insignificant compared to the first term. The first term in the series is already a very good approximation of the function. Hence $u(0.5,t) \approx \frac{400}{\pi^3}e^{-\pi^2 0.003t}.$ The approximation gets better and better as $$t$$ gets larger as the other terms decay much faster. Let us plot the function $$u(0.5,t)$$, the temperature at the midpoint of the wire at time $$t$$, in Figure 4.16.
The figure also plots the approximation by the first term. Figure 4.16: Temperature at the midpoint of the wire (the bottom curve), and the approximation of this temperature by using only the first term in the series (top curve). After $$t=5$$ or so it would be hard to tell the difference between the first term of the series for $$u(x,t)$$ and the real solution $$u(x,t)$$. This behavior is a general feature of solving the heat equation. If you are interested in behavior for large enough $$t$$, only the first one or two terms may be necessary.

Let us get back to the question of when the maximum temperature is one half of the initial maximum temperature. That is, when is the temperature at the midpoint $$12.5/2=6.25$$. We notice on the graph that if we use the approximation by the first term we will be close enough. We solve $6.25=\frac{400}{\pi^3}e^{-\pi^2 0.003t}.$ That is, $t=\frac{\ln{\frac{6.25 \pi^3}{400}}}{-\pi^2 0.003} \approx 24.5.$ So the maximum temperature drops to half at about $$t=24.5$$.

We mention an interesting behavior of the solution to the heat equation. The heat equation “smoothes” out the function $$f(x)$$ as $$t$$ grows. For a fixed $$t$$, the solution is a Fourier series with coefficients $$b_n e^{\frac{-n^2 \pi^2}{L^2}kt}$$. If $$t>0$$, then these coefficients go to zero faster than $$\frac{1}{n^p}$$ for any power $$p$$. In other words, the Fourier series has infinitely many derivatives everywhere. Thus even if the function $$f(x)$$ has jumps and corners, then for a fixed $$t>0$$, the solution $$u(x,t)$$ as a function of $$x$$ is as smooth as we want it to be.

#### 4.6.3 Insulated ends

Now suppose the ends of the wire are insulated. In this case, we are solving the equation $u_t=ku_{xx}~~~~ {\rm{with}}~~~u_x(0,t)=0,~~~u_x(L,t)=0,~~~{\rm{and}}~~~u(x,0)=f(x).$ Yet again we try a solution of the form $$u(x,t)=X(x)T(t)$$.
By the same procedure as before we plug into the heat equation and arrive at the following two equations $X''(x)+\lambda X(x)=0, \\ T'(t)+\lambda kT(t)=0.$ At this point the story changes slightly. The boundary condition $$u_x(0,t)=0$$ implies $$X'(0)T(t)=0$$. Hence $$X'(0)=0$$. Similarly, $$u_x(L,t)=0$$ implies $$X'(L)=0$$. We are looking for nontrivial solutions $$X$$ of the eigenvalue problem $$X''+ \lambda X=0$$, $$X'(0)=0$$, $$X'(L)=0$$. We have previously found that the only eigenvalues are $$\lambda_n=\frac{n^2 \pi^2}{L^2}$$, for integers $$n \geq 0$$, where eigenfunctions are $$\cos \left( \frac{n \pi}{L}x \right)$$ (we include the constant eigenfunction). Hence, let us pick solutions $X_n(x)= \cos(\frac{n \pi}{L}x)~~~~ {\rm{and}}~~~~ X_0(x)=1.$ The corresponding $$T_n$$ must satisfy the equation $T'_n(t)+ \frac{n^2 \pi^2}{L^2}kT_n(t)=0.$ For $$n \geq 1$$, as before, $T_n(t)= e^{\frac{-n^2 \pi^2}{L^2}kt}.$ For $$n=0$$, we have $$T'_0(t)=0$$ and hence $$T_0(t)=1$$. Our building-block solutions will be $u_n(x,t)=X_n(x)T_n(t)= \cos \left( \frac{n \pi}{L} x \right) e^{\frac{-n^2 \pi^2}{L^2}kt},$ and $u_0(x,t)=1.$ We note that $$u_n(x,0) =\cos \left( \frac{n \pi}{L} x \right)$$. Let us write $$f$$ using the cosine series $f(x)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n \cos \left( \frac{n \pi}{L} x \right).$ That is, we find the Fourier series of the even periodic extension of $$f(x)$$. We use superposition to write the solution as $u(x,t)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n u_n(x,t)= \frac{a_0}{2} + \sum^{\infty}_{n=1} a_n \cos \left( \frac{n \pi}{L} x \right) e^{\frac{-n^2 \pi^2}{L^2}kt}.$

Example $$\PageIndex{2}$$: Let us try the same equation as before, but for insulated ends. We are solving the following PDE problem $u_t=0.003u_{xx}, \\ u_x(0,t)= u_x(1,t)=0, \\ u(x,0)= 50x(1-x) ~~~~ {\rm{for~}} 0<x<1.$ For this problem, we must find the cosine series of $$u(x,0)$$.
For $$0<x<1$$ we have $50x(1-x)=\frac{25}{3}+\sum^{\infty}_{\underset{n~ {\rm{even}} }{n=2}} \left( \frac{-200}{\pi^2 n^2} \right) \cos(n \pi x).$ The calculation is left to the reader. Hence, the solution to the PDE problem, plotted in Figure 4.17, is given by the series $u(x,t)=\frac{25}{3}+\sum^{\infty}_{\underset{n~ {\rm{even}} }{n=2}} \left( \frac{-200}{\pi^2 n^2} \right) \cos(n \pi x) e^{-n^2 \pi^2 0.003t}.$ Figure 4.17: Plot of the temperature of the insulated wire at position $$x$$ at time $$t$$. Note in the graph that the temperature evens out across the wire. Eventually, all the terms except the constant die out, and you will be left with a uniform temperature of $$\frac{25}{3} \approx{8.33}$$ along the entire length of the wire.
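The coefficient formulas and the half-time estimate in the two examples are easy to sanity-check numerically. The sketch below (plain Python, with Simpson's rule standing in for the exact integrals) verifies the sine coefficients $$b_n = \frac{400}{\pi^3 n^3}$$ for odd $$n$$ (zero for even $$n$$), the first-term half-time estimate $$t \approx 24.5$$, and the cosine coefficients $$\frac{a_0}{2} = \frac{25}{3}$$ and $$a_n = \frac{-200}{\pi^2 n^2}$$ for even $$n$$.

```python
import math

f = lambda x: 50.0 * x * (1.0 - x)   # initial temperature, L = 1

def simpson(g, a, b, m=2000):        # composite Simpson's rule, m even
    h = (b - a) / m
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

# sine-series coefficients for the fixed-temperature (Dirichlet) ends
for n in range(1, 6):
    b_n = 2 * simpson(lambda x: f(x) * math.sin(n * math.pi * x), 0, 1)
    exact = 400 / (math.pi**3 * n**3) if n % 2 == 1 else 0.0
    assert abs(b_n - exact) < 1e-8

# time at which the midpoint temperature halves (first-term approximation)
t_half = math.log(6.25 * math.pi**3 / 400) / (-math.pi**2 * 0.003)
assert abs(t_half - 24.5) < 0.1

# cosine-series coefficients for the insulated (Neumann) ends
a0 = 2 * simpson(f, 0, 1)
assert abs(a0 / 2 - 25 / 3) < 1e-8
for n in range(1, 6):
    a_n = 2 * simpson(lambda x: f(x) * math.cos(n * math.pi * x), 0, 1)
    exact = -200 / (math.pi**2 * n**2) if n % 2 == 0 else 0.0
    assert abs(a_n - exact) < 1e-8

print(round(t_half, 2))  # ~24.48
```

All the assertions pass, and the computed half-time comes out near 24.48, consistent with the $$t \approx 24.5$$ quoted in Example 1.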
https://www.neetprep.com/question/71946-Assertion-When-coil-joined-cell-current-cell-will-less-itssteady-state-value-time-tloge-time-constant-thecircuitReason-When-coil-joined-cell-growth-current-given-byiietShow-Options--assertion-reason-true-reason----correct-explanation-assertion--assertion-reason-true-reason-not----correct-explanation-assertion--assertion-true-reason-false--assertion-reason-false/55-Physics--Electromagnetic-Induction/696-Electromagnetic-Induction
Assertion : When a coil is joined to a cell, the current through the cell will be 10% less than its steady state value at time , where $\mathrm{\tau }$ is the time constant of the circuit. Reason : When a coil is joined to a cell, the growth of current is given by $\mathrm{i}={\mathrm{i}}_{0}{\mathrm{e}}^{-\mathrm{t}/\mathrm{\tau }}$ 1. If both the assertion and the reason are true and the reason is a correct explanation of the assertion 2. If both the assertion and reason are true but the reason is not a correct explanation of the assertion 3. If the assertion is true but the reason is false 4. If both the assertion and reason are false Concept Questions :- LR circuit High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: Assertion: The magnetic flux linked with a coil is $\mathrm{\varphi }$, and the emf induced in it is $\mathrm{\epsilon }$. If $\mathrm{\varphi }=0$, $\mathrm{\epsilon }$ must be zero. Reason : 1. If both the assertion and the reason are true and the reason is a correct explanation of the assertion 2. If both the assertion and reason are true but the reason is not a correct explanation of the assertion 3. If the assertion is true but the reason is false 4. If both the assertion and reason are false Concept Questions :- Faraday 's law and lenz law High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: Assertion : The coils in the resistance boxes are made by doubling the wire. Reason : Thick wire is required in resistance box. 1. If both the assertion and the reason are true and the reason is a correct explanation of the assertion 2.
If both the assertion and reason are true but the reason is not a correct explanation of the assertion 3. If the assertion is true but the reason is false 4. If both the assertion and reason are false Concept Questions :- Self-inductance High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A coil of area  and of 50 turns is kept with its plane normal to a magnetic field B. A resistance of 30 ohm is connected to the resistance-less coil. B is 75 exp (– 200t) gauss. The current passing through the resistance at t = 5 ms will be- 1. 2. 3. 4. Concept Questions :- Faraday 's law and lenz law High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A long solenoid having 1000 turns per cm is carrying alternating current of one ampere peak value. A search coil of area of cross-section  and of 20 turns is placed in the middle of the solenoid so that its plane is perpendicular to the axis of the solenoid. The search coil registers a peak voltage . The frequency of the current in the solenoid is - 1.  1.6 per second 2.  0.16 per second 3.  15.9 per second 4.  15.85 per second Concept Questions :- Faraday 's law and lenz law High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: Rate of increment of energy in an inductor with time in series LR circuit getting charge with battery of emf E is best represented by – [inductor has initially zero current] 1. 2. 3. 4. Concept Questions :- LR circuit High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A copper rod of length 0.19 m is moving parallel to a long wire with a uniform velocity of 10 m/s. The long wire carries 5 ampere current and is perpendicular to the rod. The ends of the rod are at distances 0.01 m and 0.2 m from the wire. The emf induced in the rod will be- 1. 2. 3. 4. 
Concept Questions :- Motional emf High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A conducting square loop of side L and resistance R moves in its plane with a uniform velocity v perpendicular to one of its sides. A magnetic induction B, constant in time and space, pointing perpendicular and into the plane of the loop exists everywhere, see figure. The current induced in the loop is 1.  BLv/R clockwise 2.  BLv/R anticlockwise 3.  2 BLv/R anticlockwise 4.  zero Concept Questions :- Motional emf High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A coil of radius 1 cm and 100 turns is placed in the middle of a long solenoid of radius 5 cm and having 8 turns/cm. The mutual inductance in millihenry will be- 1.  0.0316 2.  0.063 3.  0.105 4.  Zero Concept Questions :- Mutual-inductance High Yielding Test Series + Question Bank - NEET 2020 Difficulty Level: A circular coil of radius 5 cm has 500 turns of a wire. The approximate value of the coefficient of self induction of the coil will be - 1. 2. 3. 4. Concept Questions :- Self-inductance
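Two of the numerical questions above can be checked with the standard textbook formulas (the answer options are partly missing from this extraction, so this is offered as a sanity check under the usual idealizations of a long solenoid and a thin wire): the mutual inductance of a small coil inside a long solenoid is M = mu0·n·N·A, and the emf of a rod moving parallel to a long straight wire is emf = (mu0·I·v / 2·pi)·ln(r2/r1).

```python
import math

mu0 = 4e-7 * math.pi  # permeability of free space, H/m

# Mutual inductance of a 100-turn coil (radius 1 cm) inside a long
# solenoid wound at 8 turns/cm:  M = mu0 * n * N * A
n = 8 * 100            # solenoid turns per metre
N = 100                # turns of the inner coil
A = math.pi * 0.01**2  # area of the inner coil, m^2
M = mu0 * n * N * A
print(round(M * 1e3, 4), "mH")   # ~0.0316 mH, matching option 1

# Motional emf of a rod moving parallel to a long straight wire:
# emf = (mu0 * I * v / (2*pi)) * ln(r2 / r1)
I_wire, v, r1, r2 = 5.0, 10.0, 0.01, 0.2
emf = mu0 * I_wire * v / (2 * math.pi) * math.log(r2 / r1)
print(round(emf * 1e6, 1), "uV")  # ~30.0 uV, i.e. about 3e-5 V
```

The first result, 0.0316 mH, matches option 1 of the mutual-inductance question; the second, about 30 microvolts, is the conventional answer for the moving-rod problem.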
https://www.geteasysolution.com/96.66666666666666_as_a_fraction
# 96.66666666666666 as a fraction

## 96.66666666666666 as a fraction - solution and the full explanation with calculations.

Below you can find the full step-by-step solution for your problem. We hope it will be very helpful and will help you understand the solving process. If it's not what you are looking for, type your number into the box below and see the solution.

## What is 96.66666666666666 as a fraction?

To write 96.66666666666666 as a fraction, write 96.66666666666666 as the numerator and put 1 as the denominator. Now multiply the numerator and denominator by 10 until the numerator is a whole number. The decimal has 14 digits after the point, so this takes 14 multiplications: 96.66666666666666 = 96.66666666666666/1 = 966.6666666666666/10 = 9666.666666666666/100 = ... = 9666666666666666/100000000000000 And finally we have: 96.66666666666666 as a fraction equals 9666666666666666/100000000000000, which reduces to 4833333333333333/50000000000000.
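A quick cross-check with Python's fractions module, which parses the decimal string exactly and so keeps all 14 decimal digits (note that 96.66666666666666 is itself a truncation of the repeating decimal 96.666... = 290/3):

```python
from fractions import Fraction

# Parse the decimal string exactly: 14 digits after the point,
# so the raw fraction is 9666666666666666/10**14, which then reduces.
x = Fraction("96.66666666666666")
print(x)                          # 4833333333333333/50000000000000

# The decimal is a truncation of 96.666... = 290/3; a small
# denominator bound recovers that simpler fraction.
print(x.limit_denominator(1000))  # 290/3
```

limit_denominator is useful whenever the decimal you start from is a rounded version of a simple fraction, as it is here.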
https://annakrystalli.me/rrresearchACCE20/
# Course description

In order to ensure robustness of outputs and maximise the benefits of ACCE research to future researchers and society more generally, it is important to share the underlying code and data. But for sharing to have any impact, such materials need to be created FAIR (findable, accessible, interoperable, reusable), i.e. they must be adequately described, archived, and made discoverable to an appropriate standard. Additionally, if analyses are to be deemed robust, they must be at the very least reproducible, but ideally well documented and reviewable. R and RStudio tools and conventions offer a powerful framework for making modern, open, reproducible and collaborative computational workflows more accessible to researchers. This course focuses on data and project management through R and RStudio. It will introduce students to best practice and equip them with modern tools and techniques for managing data and computational workflows to their full potential. The course is designed to be relevant to students with a wide range of backgrounds, working with anything from relatively small sets of data collected from field or experimental observations, to those taking a more computational approach with bigger datasets.

## Learning Outcomes

By the end of the workshop, participants will be able to:

• Understand the basics of good research data management and be able to produce clean datasets with appropriate metadata.
• Manage computational projects for reproducibility, reuse and collaboration.
• Use version control to track the evolution of research projects.
• Use R tools and conventions to document code and analyses and produce reproducible reports.
• Be able to publish, share materials and collaborate through the web.
• Understand why this all matters!
## Course Outline

#### Sources of Materials

The first few chapters of the Basics section were heavily sourced and adapted from “Software Carpentry: R for Reproducible Scientific Analysis.” Thomas Wright and Naupaka Zimmerman (eds): Version 2016.06, June 2016, https://github.com/swcarpentry/r-novice-gapminder. The Good File Naming chapter was heavily sourced from “File organization for reproducible research.” Data Carpentry Reproducible Research Committee, 2016. Small sections in the Data Munging section were inspired by text in the online version of “R for Data Science”, Garrett Grolemund & Hadley Wickham. Images contained throughout the materials and watermarked with Scriberia were sourced from “Illustrations from the Turing Way book dashes”. Images were created by Scriberia for The Turing Way community.

Data for the main practical parts of the course were sourced from the NEON Data Portal, provided by the National Ecological Observatory Network. 2019 Provisional data downloaded from http://data.neonscience.org on 2019-08-06. Battelle, Boulder, CO, USA.

• Data Products: NEON.DOM.SITE.DP1.10098.001
• Name: Woody plant vegetation structure
• Description: Structure measurements, including height, canopy diameter, and stem diameter, as well as mapped position of individual woody plants
• Query information:
• Start Date-Time for Queried Data: 2018-08-15 16:00 (UTC)
• End Date-Time for Queried Data: 2018-08-29 16:00 (UTC)
• Domains: D01:D9

THE NEON DATA PRODUCTS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE NEON DATA PRODUCTS BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE NEON DATA PRODUCTS.
https://leanprover-community.github.io/archive/stream/217875-Is-there-code-for-X%3F/topic/fdvs.20lemmas.html
## Stream: Is there code for X?

### Topic: fdvs lemmas

#### Kevin Buzzard (Jun 22 2020 at 19:57):

Can we for example prove that if V is a 9-dimensional vector space, and X and Y are two 5-dimensional subspaces, then X \cap Y is non-zero? I'm happy to do the work proving the lemma but I don't know where to start. Do we have dim(X+Y)=dim(X)+dim(Y)-dim(X \cap Y)? Is it some statement about cardinals instead of naturals?

#### Kevin Buzzard (Jun 22 2020 at 20:01):

Oh great, search works :-) I am now reading linear_algebra.finite_dimensional

#### Kevin Buzzard (Jun 22 2020 at 21:09):

meh, the thing I want is only proved for arbitrary vector spaces as far as I can see. The API is good enough to prove the finite-dimensional case; maybe I should add it though.

#### Jalex Stark (Jun 22 2020 at 21:33):

from reading the docs in that file, it seems like "port more stuff from cardinal dimension" is a goal, so probably your lemma should be PRed

#### Kevin Buzzard (Jun 22 2020 at 21:43):

A mathematics lecturer with no Lean background was asking me about this lemma -- they wanted to know if this sort of thing was possible in Lean. Here's the proof I knocked up for them:

```lean
-- boilerplate -- ignore for now
import tactic
import linear_algebra.finite_dimensional

open vector_space
open finite_dimensional
open submodule

-- Let's prove that if V is a 9-dimensional vector space, and X and Y are 5-dimensional subspaces
-- then X ∩ Y is non-zero
theorem five_inter_five_in_nine_is_nonzero
  -- let K be a field
  (K : Type) [field K]
  -- let V be a finite-dimensional vector space over K
  (V : Type) [add_comm_group V] [vector_space K V]
  (hVfin : finite_dimensional K V)
  -- and let's assume V is 9-dimensional
  -- (note that dim will return a cardinal! findim returns a natural number)
  (hV : findim K V = 9)
  -- Let X and Y be subspaces of V
  (X Y : subspace K V)
  -- and let's assume they're both 5-dimensional
  (hX : findim K X = 5) (hY : findim K Y = 5) :
  -- then X ∩ Y isn't 0
  X ⊓ Y ≠ ⊥ :=
-- Proof
begin
  -- I will give a proof which uses *the current state of mathlib*.
  -- Note that mathlib can be changed, and other lemmas can be proved,
  -- and notation can be created, which will make this proof much more readable.
  -- The key lemma from the library we'll need is that dim(X + Y) + dim(X ∩ Y) = dim(X) + dim(Y)
  have key : dim K ↥(X ⊔ Y) + dim K ↥(X ⊓ Y) = dim K X + dim K Y :=
    dim_sup_add_dim_inf_eq X Y,
  -- Unfortunately this is only proved in the library, right now, for arbitrary vector
  -- spaces (i.e. no finite dimension assumptions). This can be fixed, by adding the
  -- finite-dimensional version. The point is that there's an invisible map from natural
  -- numbers to cardinals, but it is a map; key is currently a statement about cardinals.
  -- Because the lemma is not proved for finite-dimensional spaces, I have to
  -- deduce it from the cardinal version
  repeat {rw ←findim_eq_dim at key},
  norm_cast at key,
  -- key is now a statement about natural numbers. It says dim(X+Y)+dim(X∩Y)=dim(X)+dim(Y)
  -- Now let's substitute in the hypothesis that dim(X)=dim(Y)=5
  rw hX at key, rw hY at key,
  -- so now we know dim(X+Y)+dim(X∩Y)=10.
  -- Let's now turn to the proof of the theorem.
  -- Let's prove it by contradiction. Assume X∩Y is 0
  intro hXY,
  -- then we know dim(X+Y) + dim(0) = 10
  rw hXY at key,
  -- and dim(0) = 0
  rw findim_bot at key,
  -- so the dimension of X+Y is 10
  norm_num at key,
  -- But the dimension of a subspace is at most the dimension of the space
  have key2 : findim K ↥(X ⊔ Y) ≤ findim K V := findim_le (X ⊔ Y),
  -- and now we can get our contradiction by just doing linear arithmetic
  linarith,
end
```

#### Kevin Buzzard (Jun 22 2020 at 21:47):

I don't have time to PR this tonight, but this is an attempt to show a mathematician interested in using Lean for teaching how to do a question on his problem sheet, so the easier on the eye we can make it the better.

#### Yakov Pechersky (Jun 22 2020 at 21:54):

Why do you set the `hVfin : finite_dimensional K V` as opposed to `[finite_dimensional K V]`?

#### Yakov Pechersky (Jun 22 2020 at 21:55):

It's a bit confusing, because your other hypotheses `h*` are equalities, and `X Y` are terms of a type, but `hVfin` is a hypothesis but not an equality.

#### Chris Hughes (Jun 22 2020 at 22:13):

Is it worth renaming `findim` to `dim` and using `infindim` for infinite dimension? How often does reasoning about infinite dimension come up? My guess is not that much

#### Kevin Buzzard (Jun 22 2020 at 23:10):

@Yakov Pechersky I've never used that stuff before -- if it was a class I missed it, it's just an error on my part. Thanks!

#### Kevin Buzzard (Jun 22 2020 at 23:11):

I think this stuff about dimensions was written by Mario when he was under the impression that ordinals and cardinals were actually used by mathematicians

#### Kevin Buzzard (Jun 22 2020 at 23:14):

I think there's also an argument for `card` being the finite one (sending infinite sets/types to zero) and `infincard` being the cardinal one. I might be biased here, I never see cardinals in number theory, perhaps they show up in other courses.
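For readers skimming past the Lean, the counting argument in Kevin's proof, written out as ordinary mathematics (this summary is not part of the original thread):

$$\dim(X + Y) + \dim(X \cap Y) = \dim X + \dim Y = 5 + 5 = 10.$$

Since $X + Y$ is a subspace of $V$, we have $\dim(X + Y) \le 9$, hence $\dim(X \cap Y) \ge 1$ and $X \cap Y \neq 0$.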
It's perhaps worth noting that cardinals are defined in the Imperial curriculum in an optional 3rd year logic course which is not a prerequisite for any other course

#### Scott Morrison (Jun 23 2020 at 00:03):

I like this proposal to give the cardinal-valued versions the clumsier names!

#### Mario Carneiro (Jun 23 2020 at 00:34):

that's fine with me. In the original version there was no `findim` at all, the idea was to use `dim` and coercion

#### Mario Carneiro (Jun 23 2020 at 00:34):

I believe the case of countably infinite dimension comes up reasonably often

#### Mario Carneiro (Jun 23 2020 at 00:35):

e.g. Hilbert spaces

#### Mario Carneiro (Jun 23 2020 at 00:36):

Uncountably infinite dimension is mostly only useful for choicy constructions like Hamel bases

#### Jalex Stark (Jun 23 2020 at 00:36):

i think you don't use dimension-based arguments very much in that context, precisely because cardinal arithmetic isn't strong enough to help you

#### Mario Carneiro (Jun 23 2020 at 00:36):

No, but you need to be able to say "this has dimension omega" for sure

#### Mario Carneiro (Jun 23 2020 at 00:37):

we ended up defining "this has dimension < omega" as a typeclass because it makes sense

#### Mario Carneiro (Jun 23 2020 at 00:38):

But I think in more general circumstances it is more useful to be able to reason about free modules than bases directly

#### Jalex Stark (Jun 23 2020 at 00:38):

I'm arguing only in favor of "findim gets used much more than dim" (and therefore deserves the shorter names)

Last updated: May 17 2021 at 14:12 UTC
https://techwhiff.com/learn/5-1-points-details-salgtrig4-74041-my-notes-ask/216928
# 5. (-/1 Points) DETAILS SALGTRIG4 7.4.041.

###### Question:

Solve the given equation. (Enter your answers as a comma-separated list. Let k be any integer. Round terms to three decimal places where appropriate. If there is no solution, enter NO SOLUTION.)

4 sin²(θ) − 4 sin(θ) + 1 = 0

θ =
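The trig equation above is left unanswered on the page; for reference, it factors as a perfect square, (2 sin θ − 1)² = 0, so sin θ = 1/2 and θ = π/6 + 2kπ or 5π/6 + 2kπ. A quick numerical check (not part of the original page):

```python
import math

def f(t):
    # Left-hand side of 4*sin^2(t) - 4*sin(t) + 1 = 0
    s = math.sin(t)
    return 4 * s * s - 4 * s + 1

# (2*sin(t) - 1)^2 = 0  =>  sin(t) = 1/2  =>  t = pi/6 + 2k*pi or 5*pi/6 + 2k*pi
for k in range(-2, 3):
    for base in (math.pi / 6, 5 * math.pi / 6):
        assert abs(f(base + 2 * k * math.pi)) < 1e-9
```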
http://mathoverflow.net/revisions/34154/list
The symmetric square of a genus $2$ curve is a blow-up of a 2-torus in one point (the canonical divisor in the Jacobian). Nice example for Hilbert schemes.
https://domygmat.com/geometry-formulas-sheet/
# Geometry Formulas Sheet

[Figure: Geometry Formulas Table (scheck.eps)]

In this section, we show that the geometry formulae calculated by the cylinder method of hermiticity are not sufficient to describe the uniaxial transformation of an ordinary geometric model. This is due to the fact that the geometry expressions are of the form $$\begin{cases} A_0 \rightarrow \partial S_0 \\ B_0 \rightarrow S_0 \\ C_0 \rightarrow \partial C_0 \\ B_1 \rightarrow C_0 \\ D_0 \rightarrow S_0 \\ D_1 \rightarrow C_0 \\ D_2 \rightarrow \partial D_2 \end{cases}$$ of the tangent to the surface $T = Z$. Therefore, the other part of the geometrical expressions is complicated and is explained by the definition given above.

Let $S$ be a star chart of $\Omega^2$, let $\tilde{S} = \tilde{S}(z,\widetilde{\bmod})$, and let $\tilde{\Gamma}_2$ be defined by

$$\begin{aligned} S_{\tilde{\Gamma}_1} &= \int_G \!\! \partial_{t_1} S(\tilde X) \; \mbox{d}z,\\ \mathcal F_{\mathrm{poly}} &= \int_{\Gamma_1 \times \Gamma_2} \left( \left. X|\tilde{\Gamma}_1 [t, \mathcal{S}_{\tilde{\Gamma}_1}] \right| - \left. \gamma |\tilde{S} ([t, \mathcal{S}_{\tilde{\Gamma}_1}]) \right| \right),\\ \mathcal F_{\mathrm{poly}} &\stackrel{{\nabla}}{\longrightarrow} \mathcal F_{\mathrm{poly}} + \mathcal F_{\mathrm{poly}_0 \oplus \mathrm{poly}}.\end{aligned}$$

This lemma is used because it is applicable in the more general situation in which *all* symbols are represented more numerically, where $C_0$ is to be replaced by $\partial \widetilde{\Gamma}_0 [t,\mathcal{S}_{\widetilde{\Gamma}_1}]$ and $C_1$ by $- \partial_{t_1} d \widetilde{\Gamma}_1 [t,\mathcal{S}_{\widetilde{\Gamma}_1}]$. Then, for $\widetilde{\Gamma}_2$, we only have to show that

$$\left. \partial_{t_1} d \widetilde{\Gamma}_1 [t,\mathcal{S}_{\widetilde{\Gamma}_1}] - \widetilde{\Gamma}_1 [t,y_u] \right|_{\widetilde{\Gamma}_1 = \Gamma_2[t,y_u]} = 0,$$

where $\Gamma_1$ and $\Gamma_2$ are geometric variables in the unit tangent line.

## Geometry Formulas Sheet: Table of Contents

Categories for Figures: Two-Dimensional Space. Source: Encyclopedia of Ideas: World History & Astronomy, second volume, A History of Mathematical Geometry in Australia & New Zealand, Part 10 (3, 1829-1837).

Downloadable content on this website includes the analysis of the astronomical world; it is an important part of exploring every aspect of the science of space, and it is with the publication of our own work that it has become quite relevant to the topics discussed in this paper.

Some reference material, which will be provided on this website, will be available after its download. Important sections where the most relevant items will be:

• Tunnels with diameter 10m
• Helical bands which are arranged in vertical planes in such a way as to improve observation of rings; spiral geodesics and radial disc (A)
• Surface-level methods (general method) (Aii)
• Quaternion Algodive (Aii) and Geometry (Aiiii)
• Boundary elements
• Geometric functions
• Integers
• Viscosity functions
• Plane boundaries
• Radius-of-axis-construction
• Quaternion symbols
• Masses
• Viscosity functions, RIs and Their Determinants
• Spherical shapes
• Angular and parallax angles
• Bundles
• Cylindrical boundaries
• Slides
• Shapes
• Kelvin curves

References: Reference material on this website is licensed for the scientific dissemination of its content. Please do not copy it. You could purchase it from some vendors and add it to our materials as an order. For further information please contact our agent.

Installation procedure: After the writing of this report, the information on the website cannot be saved. The author accepts whatever he desires for the time to be preserved.
https://www.sarthaks.com/2717503/the-equation-of-tangent-of-parabola-y2-9x-at-point-4-6-is
# The equation of the tangent to the parabola y² = 9x at the point (4, 6) is:

1. 3x – 4y + 12 = 0
2. 3y – 4x + 12 = 0
3. 3x – 4y = 12
4. 3y – 4x = 12
5. None of these

Correct Answer - Option 1: 3x – 4y + 12 = 0

Concept:

The equation of the tangent to the parabola y² = 4ax at the point (x1, y1) is given by: yy1 = 2a(x + x1)

Calculation:

Given: the equation of the parabola is y² = 9x, so 4a = 9 and a = $\frac{9}{4}$.

Let the point be P = (4, 6), so x1 = 4 and y1 = 6.

As we know, the equation of the tangent to the parabola y² = 4ax at the point (x1, y1) is yy1 = 2a(x + x1).

⇒ 6y = $\frac{9(x + 4)}{2}$

⇒ 12y = 9x + 36

⇒ 4y = 3x + 12 (dividing by 3)

⇒ 3x – 4y + 12 = 0

Hence, the equation of the required tangent is: 3x – 4y + 12 = 0
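A quick independent check of the answer (not part of the original solution): implicit differentiation of y² = 9x gives 2y·y′ = 9, so the slope at (4, 6) is 9/12 = 3/4, which is exactly the slope of the line 3x − 4y + 12 = 0.

```python
from fractions import Fraction

# Point of tangency on y^2 = 9x
x1, y1 = Fraction(4), Fraction(6)
assert y1 ** 2 == 9 * x1          # (4, 6) lies on the parabola

# Implicit differentiation: 2*y*y' = 9, so the slope at (4, 6) is (9/2)/6 = 3/4
slope = Fraction(9, 2) / y1
assert slope == Fraction(3, 4)

# The line 3x - 4y + 12 = 0 passes through (4, 6) and has slope 3/4
assert 3 * x1 - 4 * y1 + 12 == 0
```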
https://staging.coursekata.org/preview/book/5f720614-8b5f-4a4b-917f-b083ccf3a24e/lesson/14/3
## 10.3 Using the Normal Distribution to Construct a Confidence Interval

We have introduced two methods of creating sampling distributions: simulation and bootstrapping. We will now introduce one more method: modeling the sampling distribution with a mathematical probability distribution, the normal curve.

We used the normal curve back in Chapter 6 as a way to calculate probabilities in the population distribution. Not all population distributions are normal, but when they are, the normal curve gives us an easy way to calculate a probability. If we model the distribution of Thumb length with the normal curve, we can simply use the xpnorm() function to tell us the probability of the next randomly selected individual having a Thumb length greater than 65 mm, for example.

Because of the Central Limit Theorem, the normal curve turns out to be an excellent model for a sampling distribution of means. Even if the population distribution is not normal, the sampling distribution is well modeled by the normal curve, especially when sample sizes are larger.
And before we had easy access to computers, everyone used the normal model, and the Central Limit Theorem, as a way to estimate the standard error.

Here we will use some R code to fit the normal curve over the bootstrapped sampling distribution of means. As you can see, the normal curve fits pretty well.

```r
gf_dhistogram( ~ mean, data = bootSDoM, fill = "darkblue") %>%
  gf_dist("norm", color = "darkorange",
          params = list(mean(bootSDoM$mean), sd(bootSDoM$mean)))
```

### Using the Normal Model

The logic of using the normal model is exactly the same as using a simulated or bootstrapped sampling distribution. What we are trying to find out is the range of possible population means (represented in the sampling distributions below) that could have produced the particular sample mean we observed in our study.

In reality, there is no population mean for which our sample mean is impossible. To say that another way, our sample mean is possible under any population mean. Instead of focusing on what is possible, we focus on what is most probable. If our sample mean is within the range of the most probable (95%) sample means for a specific population mean, that population mean is included in the 95% confidence interval.

The margin of error represents how far off the true population mean could be from our estimate. We use sampling distributions to find the margin of error: the distance between the hypothesized lower bound of possible population means and the 2.5% cutoff point above which it would be unlikely for our sample to have been drawn. The same margin of error lies between the hypothesized upper bound and its 2.5% cutoff point.

If we want to find the confidence interval (the actual values of the lower and upper bounds), subtracting the margin of error from the sample mean will tell us exactly where the lower bound of the confidence interval is. We could follow a similar method to find the upper bound.
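The bootstrapped sampling distribution of means used above takes only a few lines to build in any language. A sketch in Python, for readers working outside R (the sample here is simulated stand-in data, not the actual Fingers$Thumb measurements):

```python
import random
import statistics

random.seed(1)

# Stand-in for the thumb-length sample (n = 157); the real data live in Fingers$Thumb
sample = [random.gauss(60, 8.7) for _ in range(157)]

# Bootstrap: resample with replacement many times, recording each resample's mean
boot_means = [
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(1000)
]

# The SD of the bootstrapped means approximates the standard error s / sqrt(n)
boot_se = statistics.stdev(boot_means)
analytic_se = statistics.stdev(sample) / len(sample) ** 0.5
```

With 1000 resamples, ordering `boot_means` and reading off the 25th and 975th values gives the same 95% interval construction described above.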
### Using Standard Errors as the Unit

With simulated and bootstrapped sampling distributions, we constructed sampling distributions, literally arranged the means in order, and then looked at the cutoffs (the 25th and 975th means) to find the confidence interval. We used the confidence interval to figure out the margin of error.

With the normal distribution we must take a different approach. The normal distribution is a mathematical model, so there is nothing to order or count. We need a different way to calculate the margin of error and the two 2.5% cutoff points.

A rough way to do this is to use the "empirical rule," which we first introduced in Chapter 6. We've reproduced the figure from Chapter 6 below. According to the empirical rule, 95% of the area under the normal curve is within two standard deviations, plus or minus, of the mean of the distribution. So, even before we know the standard deviation of our sampling distribution (which we call the standard error), we know that the lower cutoff point is going to be approximately two standard errors below the sample mean, and the upper cutoff will be two standard errors above the sample mean.

### Directly Calculating the Standard Error Using the Central Limit Theorem

We know how wide the confidence interval will be in standard errors (2 on each side of the sample mean; a total of 4). But if we want to know the width of the confidence interval in millimeters, we will need to figure out how big the standard error of the sampling distribution is.

The Central Limit Theorem provides a formula for calculating the standard error of a sampling distribution. Do you remember what the formula is for calculating standard error? Because we don't know what the true value of $$\sigma$$ is, we can estimate the standard error by dividing the estimated standard deviation based on our sample ($$s$$) by the square root of n (the sample size, which in this case is 157).
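For reference, the Central Limit Theorem formula just described, with $$\sigma$$ the population standard deviation estimated by the sample standard deviation $$s$$:

$$\text{standard error} = \frac{\sigma}{\sqrt{n}} \approx \frac{s}{\sqrt{n}}$$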
Use the code window below as a calculator to estimate the standard error of Fingers$Thumb. Note, we have written code to save the favstats of thumb lengths in Thumb.stats.

```r
Thumb.stats <- favstats(~ Thumb, data = Fingers)

# Estimate the standard error
Thumb.stats$sd / sqrt(157)
```

```
[1] 0.696466
```

Hey, that's very close to what we thought the standard error would be (.7, half of the margin of error we got from simulations – 1.4)! So now let's go back to our original question: given the sample mean we observed (our estimate), what is the range of possible values within which we could be 95% confident that the true population mean would lie?

Now, using the standard error you just calculated, figure out the approximate 95% confidence interval around the observed sample mean. Is it close to the confidence interval we got from simulation and bootstrapping (58.7 to 61.5)?
```r
# Here we saved the standard error in SE
SE <- Thumb.stats$sd / sqrt(157)

# Calculate the confidence interval using SE
# Upper bound
Thumb.stats$mean + 2*SE

# Lower bound
Thumb.stats$mean - 2*SE
```

```
[1] 61.49659
[1] 58.71073
```

The confidence interval (58.7, 61.5) is very similar to what we got from simulations and bootstrapping!

### Using R to Calculate the Confidence Interval

Although we have been focusing on the confidence interval for the mean, it is important to note that the mean is just one parameter we can estimate. Ultimately, we can create confidence intervals for all kinds of parameters, not just the mean.

As you may recall, the simplest model (what we have been calling the empty model) only estimates one parameter, the mean. Remember, we used the lm() function to fit this one-parameter model to our Fingers data and then save it as Empty.model.

```r
Empty.model <- lm(Thumb ~ NULL, data = Fingers)
```

We can print out the parameter estimates by just typing the name of our saved model.

```r
Empty.model
```

```
Call:
lm(formula = Thumb ~ NULL, data = Fingers)

Coefficients:
(Intercept)
       60.1
```

The function confint.default() takes a model as its input, and then computes the 95% confidence intervals for the parameters of that model using the normal distribution. Try running the code below.
```r
# This calculates the confidence intervals for the parameters of this
# model using the normal approximation of the sampling distribution.
confint.default(Empty.model)
```

```
               2.5 %   97.5 %
(Intercept) 58.73861 61.46871
```

Ta da! You might be thinking: Why didn't they just lead with this? Why did we have to go through simulations and bootstrapping? We could have just told you about this function from the beginning. But then you wouldn't have had the rich understanding of what these numbers meant, or what this function is doing.
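Notice that confint.default() gives a slightly narrower interval than our rule of thumb. That is because it uses 1.96 standard errors (the z value that cuts off the middle 95% of a normal distribution), while we rounded up to 2. A quick arithmetic cross-check, in Python rather than R; the mean used here is back-computed from the bounds printed earlier (61.49659 minus two standard errors), since R printed it rounded as 60.1:

```python
from math import sqrt

mean = 60.103658   # sample mean of Thumb, back-computed from the printed bounds
se = 0.696466      # standard error computed earlier: sd / sqrt(157)

rough = (mean - 2 * se, mean + 2 * se)         # the plus-or-minus 2 SE rule of thumb
normal = (mean - 1.96 * se, mean + 1.96 * se)  # what confint.default() computes

print(rough)    # about (58.71, 61.50)
print(normal)   # about (58.74, 61.47), matching the R output above
```

The two intervals differ only in the third decimal place, which is why the 2-SE rule of thumb is good enough for mental arithmetic.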
http://www.jos.ac.cn/inpress.htm
In Press

In Press articles are edited and published online ahead of issue. When the final article is assigned to volumes/issues, the Article in Press version will be removed and the final version will appear in the associated published volumes/issues.

## Design of a C-band polarization rotator-splitter based on a mode-evolution structure and an asymmetric directional coupler (Available online)

A C-band polarization rotator-splitter based on a mode-evolution structure and an asymmetric directional coupler is proposed. The mode-evolution structure is designed in a bi-level taper through which the TM0 mode can evolve into the TE1 mode. The TE1 mode is then coupled to the TE0 mode at the cross port using the asymmetric directional coupler. The input TE0 mode propagates along the waveguide without mode conversion and exits at the through port. From the experimental results, the extinction ratio is lower than 30 dB and the excess loss is less than 1 dB for the input TE0 mode over the whole C-band. For the input TM0 mode, the ER and the EL are, respectively, lower than −10 and 1.5 dB.

## A fast-locking bang-bang phase-locked loop with adaptive loop gain controller (Available online)

This paper proposes a fast-locking bang-bang phase-locked loop (BBPLL). A novel adaptive loop gain controller (ALGC) is proposed to increase the locking speed of the BBPLL, and a novel bang-bang phase/frequency detector (BBPFD) with adaptive-mode-selective circuits is proposed to select the locking mode of the BBPLL during the locking process. Based on the detected results of the BBPFD, the ALGC can dynamically adjust the overall loop gain for a fast locking procedure. Compared with the conventional BBPFD, only a few gates are added in the proposed BBPFD, so the BBPFD with adaptive-mode-selective circuits is realized with little area and power penalty. The fast-locking BBPLL is implemented in a 65 nm CMOS technology.
The core area of the BBPLL is 0.022 mm². Measured results show that the BBPLL operates over a frequency range from 0.6 to 2.4 GHz. When operating at 1.8 GHz, the power consumption is 3.1 mW with a 0.9-V supply voltage. With the proposed techniques, the BBPLL achieves a normalized locked time of 1.1 μs @ 100 MHz frequency jump. The figure-of-merit of the fast-locking BBPLL is −334 dB.

## A 0.6-V, 69-dB subthreshold sigma-delta modulator (Available online)

In this paper a 0.6 V, 14 bit/500 Hz subthreshold inverter-based sigma-delta modulator is proposed. In the first integrator of the modulator, a bootstrap switch is used to accomplish accurate signal sampling. Without an operational transconductance amplifier (OTA), the sigma-delta modulator adopts a cascode inverter operating in the subthreshold region to save power. The modulator is fabricated in a 0.13 μm CMOS mixed-signal process. The experimental results show that with the 0.6 V power supply it achieves a maximum SNDR of 69.7 dB and an ENOB of 11.3 bit, respectively, while consuming only 5.07 μW.

## Analysis and performance exploration of high performance (HfO2) SOI FinFETs over the conventional (Si3N4) SOI FinFET towards analog/RF design (Available online)

Nowadays FinFET devices have replaced MOS devices in almost all complex integrated circuits of portable electronic gadgets such as computer peripherals, tablets, and smartphones. The scaling of FinFETs is ongoing, and analog/RF performance is most affected by increased short channel effects (SCEs) at sub-22 nm technology nodes. This paper explores the analog/RF performance of the high performance device-D2 (conventional HfO2 spacer SOI FinFET) and device-D3 (source/drain extended HfO2 spacer SOI FinFET) over device-D1 (conventional Si3N4 spacer SOI FinFET) at the 20 nm technology node through three-dimensional (3-D) simulation.
The major performance parameters, namely Ion (ON current), Ioff (OFF current), gm (transconductance), gd (output conductance), AV (intrinsic gain), SS (sub-threshold slope), TGF = gm/Id (trans-conductance generation factor), VEA (early voltage), GTFP (gain trans-conductance frequency product), TFP (trans-conductance frequency product), GFP (gain frequency product), and fT (cut-off frequency), are studied to evaluate the analog/RF performance of the different flavored SOI FinFET structures. For analog performance, devices D3 and D2 give better results in terms of gm, ID (drain current), and SS, while for RF performance device-D1 is better in terms of fT, GTFP, TFP, and GFP, at both low and high drain bias (VDS = 0.05 V and VDS = 0.7 V, respectively).

## InP-based monolithically integrated few-mode devices (Available online)

Mode-division multiplexing (MDM) has become an increasingly important technology to further increase the transmission capacity of optical-fiber-based communication networks, data centers, and waveguide-based on-chip optical interconnects. Mode manipulation devices are indispensable in an MDM system and have been widely studied in fiber, planar lightwave circuits, and silicon and InP based platforms. InP-based integration technology provides the easiest accessibility to bring together the functions of laser sources, modulators, and mode manipulation devices on a single chip, making it a promising solution for fully integrated few-mode transmitters in the MDM system. This paper reviews recent progress in InP-based mode manipulation devices, including few-mode converters, multiplexers, demultiplexers, and transmitters. The working principle, structures, and performance of InP-based few-mode devices are discussed.
## Static performance model of GaN MESFET based on the interface state (Available online)

This paper presents a new model to study the static performance of a GaN metal–semiconductor field-effect transistor (MESFET) based on the metal–semiconductor interface state of the Schottky junction. The I–V characteristics of the MESFET under different channel lengths and different operating regimes (pinched off or not) have been obtained with our model, which depends strictly on the electrical parameters, such as the drain-gate capacitance Cgd, the source-gate capacitance Cgs, the transconductance, and the conductance. To determine the accuracy of our model, root-mean-square (RMS) errors were calculated; the experimental data agree with our model. Also, the minimum value of the electrical parameter has been calculated to obtain the maximum cut-off frequency for the GaN MESFET.

## A novel power-on-reset circuit for passive UHF RFID tag chip (Available online)

A novel power-on-reset (POR) circuit with a simple architecture, small capacitances, ultra-low power consumption, and a self-adjustable delay time of the reset pulse for passive UHF RFID tags is presented in this paper. A proposed delay element was adopted for its small capacitances and wide power supply rise time range. An inverter was used as a two-input logic device to simplify the architecture of the circuit. The technology used for design and simulation is SMIC 0.18 μm RF. Simulation results show that the circuit functions well under different process corners with different power supply rise times, and is able to generate a POR signal after the power supply is briefly powered off. The static power consumption is less than 30 pA. Moreover, the circuit operates properly along with the other modules of the analog front-end.
## Effect of phosphor sedimentation on photochromic properties of a warm white light-emitting diode (Available online)

In the process of producing a white light emitting diode, the consistency of the optical coherence and the stability of the photochromic properties are crucial indexes for measuring the quality of the product. Phosphor sedimentation is a significant factor affecting optical coherence; thus, in this paper, seven sets of control experiments were set up to observe phosphor precipitation at time intervals of 0, 2, 5, 10, 20, 30, and 40 min. The color coordinates, concentration, and optical properties were also tested. The results indicate that phosphor sedimentation occurs between 0 and 20 min, during which the color coordinate placement is concentrated, with central coordinates (x = 0.4432 ± 0.004, y = 0.4052 ± 0.002); the quality was verified because the standard deviation of colour matching (SDCM) was no greater than 7. Later, between 30 and 40 min, the central coordinates are (x = 0.4366 ± 0.003, y = 0.4012 ± 0.003), with an SDCM value higher than 7 and a more discrete color placement; this does not meet the requirements of the national standard GBT24823-2016 for general lighting LED module performance.

## Effect of RF power on the structural and optical properties of ZnS thin films prepared by RF-sputtering (Available online)

Zinc sulphide (ZnS) thin films have been grown on glass and Si substrates by reactive cathodic radio frequency (RF) sputtering. The RF power was varied in the range of 100 to 250 W, while the deposition time was set at 75 min. The optical, structural, and morphological properties of these thin films have been studied.
The optical properties (mainly thickness, refractive index, absorption coefficient, and optical band gap) were investigated by optical transmittance measurements in the ultraviolet-visible-near infrared wavelength range and by Fourier-transform infrared spectroscopy (FT-IR). XRD analysis indicated that all sputtered ZnS films were single-phase with a preferred orientation along the (111) plane of the zinc sphalerite (ZB) phase. The crystallite size ranged from 11.5 to 48.5 nm, reaching its maximum at an RF power of 200 W. UV–visible measurements showed that the ZnS films had more than 80% transmission in the visible wavelength region. In addition, the band gap energy of the ZnS films decreases slightly from 3.52 to 3.29 eV, and as the RF power is increased, the film thickness increases along with the deposition rate. Scanning electron microscopy observations revealed smooth-surfaced films. The FT-IR measurements revealed absorption bands at wave numbers 1118 and 465.02 cm−1 corresponding to the symmetric and asymmetric vibrations of the Zn-S stretching mode. X-ray reflectometry measurements of the ZnS films have shown that the density of the films (3.9 g/cm³) is close to that of solid ZnS.

## Structural and thermoelectric properties of copper sulphide powders (Available online)

Over the past few years, Cu-based materials have been intensively studied with a focus on their structural and thermoelectric properties. In this work, copper sulphide powders were synthesized by the sol-gel method. The chemical composition and the morphological properties of the obtained samples were analyzed by X-ray diffraction, differential thermal analysis, and scanning electron microscopy. It is shown that the decomposition from one phase to another can be obtained by annealing. The electrical resistivity and the crystallite size were found to be strongly affected by the phase transition.
Thermoelectric analyses showed that the digenite phase exhibits the highest power factor at room temperature. The Seebeck coefficient of the compound Cu1.8S shows a pronounced peak at the γ–β transition temperature. This behavior was statistically explained in terms of a dramatic increase in the disorder of the atoms-carriers ensemble.

## 245 GHz subharmonic receiver with on-chip antenna for gas spectroscopy application (Available online)

A 2nd transconductance subharmonic receiver for 245 GHz spectroscopy sensor applications has been proposed. The receiver consists of a 245 GHz on-chip folded dipole antenna, a common-base (CB) LNA, a 2nd transconductance subharmonic mixer (SHM), and a 120 GHz push-push VCO with a 1/64 divider. The receiver is fabricated in an fT/fmax = 300/500 GHz SiGe:C BiCMOS technology and dissipates a low power of 288 mW. Integrated with the on-chip antenna, the receiver is measured on-chip with a conversion gain of 15 dB and a bandwidth of 15 GHz, and the chip will be utilized in a PCB board design for a gas spectroscopy sensor application.

## Influence of channel/back-barrier thickness on the breakdown of AlGaN/GaN MIS-HEMTs (Available online, doi: 10.1088/1674-4926/39/9/094003)

The leakage current and breakdown voltage of AlGaN/GaN/AlGaN high electron mobility transistors on silicon with different GaN channel thicknesses were investigated. The results showed that a thin GaN channel is beneficial for obtaining a high breakdown voltage, based on the leakage current path and the acceptor traps in the AlGaN back-barrier. The breakdown voltage of the device with an 800 nm-thick GaN channel was 926 V @ 1 mA/mm, and the leakage current increased slowly between 300 and 800 V. Besides, raising the conduction band edge of the GaN channel with the AlGaN back-barrier leads to little degradation of the sheet 2-D electron gas density, especially in the thin GaN channel.
The transfer and output characteristics were not obviously deteriorated for the samples with different GaN channel thicknesses. By optimizing the GaN channel thickness and designing the AlGaN back-barrier, a lower leakage current and higher breakdown voltage would be possible.

## Theoretical simulation of T2SLs InAs/GaSb cascade photodetector for HOT condition (Available online, doi: 10.1088/1674-4926/39/9/094004)

The investigation of the interband type-II superlattice InAs/GaSb cascade photodetector in the temperature range of 320–380 K is presented in this paper. The article is devoted to the theoretical modeling of the cascade detector characteristics using the SimuApsys platform and the 4-band model (8 × 8 k·p method). The obtained theoretical characteristics are comparable with experimentally measured ones, suggesting that transport in the absorber is determined by the dynamics of intrinsic carriers and by their lifetime. An overlap equal to 120 meV was used in the calculations, and a correction term in the "non-common atom" model, Hxy = 700 meV, was added to the Hamiltonian. The electron and hole effective masses were estimated from dispersion curves and the absorption coefficient α was calculated. Based on the simulation, detectivity D* characteristics in the temperature range 320–380 K were calculated. The simulated theoretical characteristics at 320 K are comparable to the experimentally measured ones; however, at higher temperatures the experimental value of D* does not reach the theoretical values due to the low resistance of the device and the short diffusion length.

## Impact of ambient temperature on the self-heating effects in FinFETs (Available online, doi: 10.1088/1674-4926/39/9/094011)

We use an electro-thermal coupled Monte Carlo simulation framework to investigate the self-heating effect (SHE) in 14 nm bulk nFinFETs at ambient temperatures (TA) from 220 to 400 K.
Based on this method, non-local heat generation can be captured. Contact thermal resistances of Si/Metal and Si/SiO2 are selected to ensure that the source and drain heat dissipation paths are the first two heat dissipation paths. The results are as follows: (i) not all the input power (Qinput) turns into heat generation in the device region, and some is carried out by thermally non-equilibrium carriers, owing to the seriously non-equilibrium transport; (ii) a higher TA leads to a larger ratio of input power turning into heat generation in the device region at the same operating voltages; (iii) SHE can lead to serious degradation of carrier transport, which increases as TA increases; (iv) the current degradation can be 8.9% when Vds = 0.7 V, Vgs = 1 V and TA = 400 K; (v) the device thermal resistance (Rth) increases with increasing TA and is seriously impacted by the non-equilibrium transport. Hence, the impact of TA should be carefully considered when investigating SHE in nanoscale devices.

## A 0.19 ppm/°C bandgap reference circuit with high-PSRR (Available online, doi: 10.1088/1674-4926/39/9/095002)

A high-order curvature-compensated CMOS bandgap reference (BGR) topology with a low temperature coefficient (TC) over a wide temperature range and a high power supply rejection ratio (PSRR) is presented in this paper. High-order correction is realized by adding a nonlinear current INL, generated by ∆VGS across a resistor, to the current generated by a conventional first-order current-mode BGR circuit. In order to achieve a high PSRR over a broad frequency range, a voltage pre-regulating technique is applied. The circuit was implemented in a CSMC 0.5 μm 600 V BCD process. The experimental results indicate that the proposed topology achieves a TC of 0.19 ppm/°C over a 165 °C temperature range (−40 to 125 °C), and a PSRR of −123 dB @ DC and −56 dB @ 100 kHz.
In addition, it achieves a line regulation performance of 0.017%/V in the supply range of 2.8–20 V.

## Multivariate rational regression and its application in semiconductor device modeling (Available online, doi: 10.1088/1674-4926/39/9/094010)

Physics-equation-based semiconductor device modeling is accurate but time and money consuming. The need for studying new materials and devices is increasing, so an efficient and accurate device modeling method is required. In this paper, two methods based on multivariate rational regression (MRR) for device modeling are proposed: single-pole MRR and double-pole MRR. The two MRR methods are proved to be powerful in nonlinear curve fitting and have good numerical stability. The two methods are compared with OLS and LASSO by fitting the SMIC 40 nm MOSFET I–V characteristic curve; the normalized mean square error of single-pole MRR is $3.02 \times 10^{-8}$, four orders of magnitude smaller than that of ordinary least squares. The I–V characteristics of a CNT-FET and the performance indicators (noise factor, gain, power) of a low noise amplifier are also modeled using the MRR methods. The results show that MRR methods are very powerful for semiconductor device modeling and have a strong nonlinear curve fitting ability.

## An improved SOI trench LDMOST with double vertical high-k insulator pillars (Available online, doi: 10.1088/1674-4926/39/9/094009)

An SOI trench LDMOST (TLDMOST) with ultra-low specific on-resistance (Ron,sp) is proposed. It features double vertical high-k insulator pillars (Hk1 and Hk2) in the oxide trench, which are connected to the source electrode and drain electrode, respectively.
Firstly, under reverse bias voltage, most electric displacement lines produced by the charges of the depleted drift region on the source side go through Hk1, and thus the average electric field strength under the source is enhanced. Secondly, two additional electric field peaks are induced by Hk1, which further modulate the electric field in the drift region under the source. Thirdly, most electric displacement lines produced by the charges of the depleted drift region on the drain side enter Hk2. This not only introduces one more electric field peak at the corner of the oxide trench around Hk2, but also forms an enhanced vertical reduced surface field effect, which modulates the electric field in the drift region under the drain. With the effects of the two Hk insulator pillars, the breakdown voltage (BV) and the drift region doping concentration are significantly improved. The simulation results indicate that, compared with the oxide trench LDMOST (previous TLDMOST) with the same geometry, the proposed double-Hk TLDMOST enhances the BV by 86% and reduces the Ron,sp by 88%.

## Low voltage floating gate MOSFET based current differencing transconductance amplifier and its applications (Available online, doi: 10.1088/1674-4926/39/9/094002)

This article presents a low voltage, low power configuration of a current differencing transconductance amplifier (CDTA) based on floating gate MOSFETs. The proposed CDTA variant operates at a lower supply voltage of ±1.4 V with a total static power dissipation of 2.60 mW, owing to the low voltage feature of the floating gate MOSFET. A high transconductance of up to 6.21 mA/V is achieved with an extended linear range of ±130 μA. Two applications are illustrated to demonstrate the effectiveness of the proposed active block. A quadrature oscillator is realized using the FGMOS based CDTA, two capacitors, and a resistor.
The resistor is implemented using two NMOSFETs to provide high linearity and tunability. Another application is a Schmitt trigger circuit based on the proposed CDTA variant. All circuits are simulated using SPICE and TSMC 130 nm technology.

## An empirical method for improving accuracy of human eye temperature measured by uncooled infrared thermal imager (Available online, doi: 10.1088/1674-4926/39/9/094008)

In order to reduce the temperature measurement error of the uncooled infrared thermal imager, experiments were conducted to evaluate the effects of environment temperature and measurement distance on the measurement error of human eye temperature. First, the forehead temperature was used as an intermediate variable to obtain the actual temperature of human eyes. Then, the effects of environment temperature and measurement distance on the temperature measurement were separately analyzed. Finally, an empirical model was established to correlate the actual eye temperature with the measured temperature, environment temperature, and measurement distance. To verify the formula, three different environment temperatures were tested at different distances. The measurement errors were substantially reduced using the empirical model for temperature correction. The results show that this method can effectively improve the accuracy of temperature measurement using the infrared thermal imager.

## Elaboration of ZnO nanowires by solution based method, characterization and solar cell applications (Available online, doi: 10.1088/1674-4926/39/9/093002)

ZnO nanowire (NW) layers have been synthesized using a two-step chemical solution method on ITO glass substrates coated with ZnO seeds at different immersion times. The structure, morphology, and optical properties of the synthesized ZnO NWs have been investigated.
The prepared ZnO NWs have an obvious polycrystalline hexagonal wurtzite structure and are preferentially oriented along the c-axis (002). FESEM micrographs showed that the prepared ZnO NWs are nearly vertically grown and more densely packed at longer immersion times. Poly[2-methoxy-5-(2′-ethyl-hexyloxy)-1,4-phenylenevinylene], MEH-PPV, was used as an active layer to prepare three samples of MEH-PPV/ZnO solar cells based on ZnO NWs prepared at different immersion times. A maximum power conversion efficiency of 0.812% was achieved for the MEH-PPV/ZnO solar cell prepared at a longer immersion time. The improved efficiency may be attributed to the enhancement of both the open-circuit voltage and the fill factor.

## Coeffect of trapping behaviors on the performance of GaN-based devices (Available online, doi: 10.1088/1674-4926/39/9/094007)

Trap-induced current collapse has become one of the critical issues hindering the improvement of GaN-based microwave power devices. It is difficult to study the behavior of each trapping effect separately with experimental measurements. Transient simulation is a useful technique for analyzing the mechanism of current collapse. In this paper, the coeffect of surface- and bulk-trapping behaviors on the performance of AlGaN/GaN HEMTs is investigated based on two-dimensional (2D) transient simulation. In addition, the mechanism of the trapping effects is analyzed from the aspect of device physics. Two simulation models with different types of traps are used for comparison, and the simulated results reproduce the experimentally measured data. It is found that the final steady-state current decreases when both the surface and bulk traps are taken into account in the model. However, contrary to expectation, the total current collapse is dramatically reduced (e.g. from 18% to 4% for the 90 nm gate-length device).
The results suggest that the surface-related current collapse of GaN-based HEMTs may be mitigated to some degree by the participation of bulk traps with a short time constant. The work in this paper will be helpful for further optimization of material and device structures.

## Solution flow rate influence on ZnS thin films properties grown by ultrasonic spray for optoelectronic application (Available online, doi: 10.1088/1674-4926/39/9/093001)

The aim of this work is to investigate the dependence of the structural and optical properties of ZnS thin films on the solution flow rate during deposition using an ultrasonic spray method. The solution flow rate ranged from 10 to 50 mL/h and the substrate temperature was maintained at 450 °C. The effect of the solution flow rate on the properties of the ZnS thin films was investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM), optical transmittance spectroscopy (UV–Vis), and the four-point method. The X-ray diffraction analysis showed that the deposited material was pure zinc sulphide with a cubic sphalerite structure and a preferential orientation along the (111) direction. The grain size values were calculated and found to be between 38 and 82 nm. SEM analysis revealed that the deposited thin films have good adherence to the substrate surfaces, are homogeneous, and have high density. The average transmission of all films is above 65% in the wavelength range from 200 to 1100 nm, and their band gap energy values were found to be between 3.5 and 3.92 eV. The obtained film thickness varies from 390 to 1040 nm. Moreover, the electrical resistivity of the deposited films increases with the solution flow rate, from 3.51 × 10⁵ to 11 × 10⁵ Ω·cm.
## Characterizations of high-voltage vertically-stacked GaAs laser power converter (Available online, doi: 10.1088/1674-4926/39/9/094006)

Six-junction vertically-stacked GaAs laser power converters (LPCs) with n+-GaAs/p+-Al0.37Ga0.63As tunnel junctions have been designed and grown by metal-organic chemical vapor deposition for converting the power of 808 nm lasers. The LPC chips are characterized by measuring current–voltage (I–V) characteristics under 808 nm laser illumination, and a maximum conversion efficiency ηc of 53.1% is obtained for LPCs with an aperture diameter of 2 mm at an input laser power of 0.5 W. In addition, the characteristics of the LPCs are analyzed with a standard equivalent-circuit model, and the reverse saturation current, ideality factor, series resistance, and shunt resistance are extracted by fitting the I–V curves.

## The influence of pulsed parameters on the damage of a Darlington transistor (Available online, doi: 10.1088/1674-4926/39/9/094005)

In this paper, theoretical research on the heat accumulation effect in a Darlington transistor induced by high power microwaves is conducted, and the temperature variation as a function of pulse repetition frequency (PRF) and duty cycle (DC) is studied. Based on the distribution of the electric field and the current density in the Darlington transistor, the damage mechanism is investigated. The results show that for repetitive pulses with the same pulse width and different PRFs, the temperature variation increases as the PRF increases, and the peak temperature shows almost no change when the PRF is lower than 200 kHz; for repetitive pulses with the same PRF and different pulse widths, the larger the pulse width, the greater the temperature variation.
The response of the peak temperature caused by a single pulse demonstrates that there is no temperature variation when the rising time is much shorter than the falling time. In addition, the relationships between the temperature variation and time during the rising edge and during the falling edge are obtained using the curve fitting method. Finally, for a given average power, the temperature variation decreases as the DC increases.

## Effect of single walled carbon nanotubes on series resistance of Rose Bengal and Methyl Red dye-based organic photovoltaic device (Available online, doi: 10.1088/1674-4926/39/9/094001)

In this paper, the influence of single walled carbon nanotubes (SWCNTs) on the series resistance (Rs) of Rose Bengal (RB) and Methyl Red (MR) dye-based organic diodes has been studied. The experimental results reveal that SWCNT has a significant effect on Rs. The values of Rs are measured from the current–voltage (I–V) characteristics and also by utilizing the Cheung method. The values obtained from the Cheung method have been verified using H(I)–I plots for all dye-based devices. The values extracted by these two processes show good consistency with each other. It is observed that Rs is reduced significantly by incorporating SWCNT for both dyes. The estimated reductions of Rs using SWCNT are 76.08% and 64.23% as obtained from the I–V relationship, whereas Rs shows reductions of 83.5% and 67.1% when measured using the Cheung method for the RB and MR dyes, respectively. The ideality factor and barrier height of the diodes have also been extracted. The ideality factor decreased with the incorporation of SWCNT, and a reduction in barrier height for the devices has also been observed in the presence of SWCNT.
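For context, the Cheung method cited in the abstract above is a standard way to extract Rs and the ideality factor n from a diode's forward I–V curve: under the thermionic-emission model with series resistance, dV/d(ln I) = n·kT/q + I·Rs, so a plot of dV/d(ln I) against I is a straight line with slope Rs. A minimal, self-contained sketch on synthetic data, not the paper's devices or numbers; all parameter values here are assumed for illustration:

```python
import numpy as np

kT_q = 0.02585                            # thermal voltage at room temperature, in volts
n_true, Rs_true, I0 = 1.8, 120.0, 1e-9    # assumed diode parameters (illustrative only)

# Synthetic forward I-V from the thermionic-emission model with series resistance:
#   V = n*(kT/q)*ln(I/I0) + I*Rs
I = np.geomspace(1e-5, 1e-3, 200)         # forward currents, log-spaced
V = n_true * kT_q * np.log(I / I0) + I * Rs_true

# Cheung plot: dV/d(ln I) = n*(kT/q) + I*Rs, i.e. linear in I
dV_dlnI = np.gradient(V, np.log(I))
Rs_fit, intercept = np.polyfit(I, dV_dlnI, 1)
n_fit = intercept / kT_q

print(Rs_fit)   # close to the assumed 120 ohms
print(n_fit)    # close to the assumed 1.8
```

The companion H(I)–I plot mentioned in the abstract uses the same fitted n to extract the barrier height from the intercept of a second linear fit.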
## Steady state electrical–thermal coupling analysis of TSV
Available online. doi: 10.1088/1674-4926/39/9/095001

This paper presents a blended analytical electrical–thermal model for steady-state thermal analysis of through-silicon vias (TSVs) in three-dimensional (3D) integrated circuits. The proposed analytical model is validated against the commercial FEM tool COMSOL; the comparison shows that the proposed formulas have very high accuracy, with a maximum error of 0.1%. Based on the analytical model, the temperature performance of TSVs is studied and design guidelines are given: (1) as the radius of the TSV increases, the resistance decreases and the temperature can be increased; (2) the thicker the dielectric layer, the higher the temperature; (3) compared with a carbon nanotube TSV, a Cu TSV raises the temperature by 34 K and a W TSV by 41 K.

## Influence of well doping on the performance of UTBB MOSFETs
Available online.

In this work, the impact of well doping and the corresponding body bias on UTBB MOSFETs is investigated, and the ability to adjust the threshold voltage is evaluated. The results indicate that well doping can change the threshold voltage of both N- and P-channel UTBB MOSFETs. The maximum shift for a typical 26 nm gate length device is about 100 mV, corresponding to devices with a high concentration of inverse-type dopant. Body bias adjusts the threshold voltage at a rate of 100–140 mV/V for UTBB MOSFETs with a well. By optimizing well doping and body biasing, multi-threshold-voltage UTBB MOSFETs can be designed and optimized for low-power applications.
## The influence of MBE and device structure on the electrical properties of GaAs HEMT biosensors
Available online.

High electron mobility transistors (HEMTs) have the potential to be used as high-sensitivity, real-time biosensors, and HEMT biosensors have great market prospects. For such applications, the inter-chip consistency of electrical properties has an important influence on the stability and repeatability of detection. In this research, we fabricated GaAs/AlGaAs HEMT biosensors with different epitaxial structures and device structures to study this consistency, examined the relationship between channel size and consistency, and investigated the distribution of device current with location on a 2 inch GaAs wafer. Based on these studies, the optimal GaAs HEMT biosensor uses an A-type epitaxial structure and a U-type device structure with L = 40 μm and W = 200 μm.

## Optical absorption via intersubband transition of electrons in GaAs/AlxGa1−xAs multi-quantum wells in an electric field
Available online.

Based on the effective mass approximation, the Schrödinger and Poisson equations in GaAs/AlxGa1−xAs multi-quantum wells (MQWs) are self-consistently solved to obtain the wave functions and energy levels of conduction-band electrons for the ground and first excited states in the presence of a lateral electric field (LEF). The effects of size, ternary mixed crystal, doping concentration and temperature on the linear and nonlinear intersubband optical absorption coefficients (IOACs) and refractive index changes (RICs) due to the transition between the ground and first excited states are then discussed based on Fermi's golden rule. The results show that, under a fixed LEF, the IOACs red-shift as the Al composition and doping concentration increase. As the widths of both the wells and barriers increase, the IOACs blue-shift and their amplitudes increase, with the barrier width having the stronger effect on the nonlinear IOACs; increasing the temperature produces first a blue shift and then a red shift of the IOACs. With the other parameters fixed, increasing the LEF blue-shifts the IOACs, and the RICs behave similarly.

## Modulation of drain current as a function of energies substrate for InP HEMT devices
Available online.

In this paper, we present the drain current modulation of an HEMT using the TCAD SILVACO simulation tool with a drift–diffusion model at ambient temperature. The results show that decreasing the substrate energies decreases the drain current and, similarly, the transconductance, owing to the increasing concentration of electrons transferred toward the substrate region: the concentration of transferred electrons increases from 49 × 10^19 to 65 × 10^19 cm−3 as the molar fraction increases from 0.1 to 0.9. Conversely, decreasing the molar fraction from 0.9 to 0.1 increases the drain current by 63%, from 1.1 to 3 mA/mm at Vgs = 0.6 V and Vds = 1 V. These results open the possibility of producing high-performance devices that combine the AC characteristics of an HEMT with a weak drain current, which is important in the bioengineering domain.

## Spin-dependent tunneling of light and heavy holes with electric and magnetic fields
Available online.

The spin-dependent tunneling of light holes and heavy holes is analysed in a symmetrical heterostructure with externally applied electric and magnetic fields. The effects of the applied bias voltage, magnetic field and reverse bias on the polarization efficiency of light and heavy holes are discussed. The current density of spin-up and spin-down light holes increases with the bias voltage and then saturates, whereas the current density of spin-up heavy holes is almost negligible. The applied bias voltage and the magnetic field influence the resonance-polarization energy, polarization efficiency and current density of the heavy holes more strongly than those of the light holes.

## Wet nitrogen oxidation technology and its anisotropy influence on VCSELs
Available online.

Vertical cavity surface emitting lasers (VCSELs) are widely used in optical communications and optical interconnects owing to their advantages of low threshold, low power consumption and so on. Wet nitrogen oxidation, which uses H2O molecules to oxidize Al0.98Ga0.02As, provides electrical and optical mode confinement. In this paper, the effects of oxidation time, oxidation temperature and oxidation anisotropy on the oxidation rate are explored and demonstrated. The ratio of the oxidation rate along the [0–11] crystal orientation to that along [011] is defined as the oxidation anisotropy coefficient, which decreases with increasing oxidation temperature and time. To analyze the effect of oxidation anisotropy on VCSEL performance, VCSELs with oxide apertures of two different shapes were designed and fabricated. The static performance of these VCSELs was measured; their threshold current ratio of ~0.714 is in good agreement with the theoretical value of ~0.785. This research on wet nitrogen oxidation and its anisotropy serves as an important reference for the batch fabrication of large-area VCSELs.
## Electrical properties of Si/Si bonded wafers based on an amorphous Ge interlayer
Available online.

An amorphous Ge (a-Ge) intermediate layer is introduced at the Si bonded interface to lower the annealing temperature and achieve good electrical characteristics. The interface and electrical characteristics of n-Si/n-Si and p-Si/n-Si junctions manufactured by low-temperature wafer bonding with a thin amorphous Ge layer are investigated. The bubble density decreases tremendously when the a-Ge film is not immersed in DI water, owing to the reduction of –OH groups. When the samples are annealed at 400 °C for 20 h the bubbles disappear entirely, which can be explained by the formation of polycrystalline Ge (which absorbs H2) at the bonded interface. The junction resistance of the n-Si/n-Si bonded wafers decreases with increasing annealing temperature, consistent with the recrystallization of the a-Ge during high-temperature annealing. The carrier transport of the Si-based PN junction annealed at 350 °C is consistent with the trap-assisted tunneling model, while that annealed at 400 °C follows the carrier recombination model.

## Investigation of the on-state behaviors of the variation of lateral width LDMOS device by simulation
Available online.

This paper examines the on-state characteristics of the variation of lateral width (VLW) LDMOS device. A three-dimensional numerical analysis is performed to investigate the specific on-resistance of the VLW LDMOS device; the simulation results agree well with analytical calculations based on the device dimensions, providing a theoretical basis for future device design. The self-heating effect of the VLW structure on a silicon-on-insulator (SOI) substrate is then compared with that on a silicon carbide (SiC) substrate by 3D thermoelectric simulation. The electrical characteristics and temperature distribution indicate that using SiC as the substrate effectively mitigates the self-heating penalty and improves reliability.

## Implementation of slow and smooth etching of GaN by inductively coupled plasma
Available online.

Slow and smooth etching of gallium nitride (GaN) by BCl3/Cl2-based inductively coupled plasma (ICP) is investigated in this paper. The effects of the etch parameters, including ICP power, radio frequency (RF) power and the flow rates of Cl2 and BCl3, on the GaN etch rate and etched-surface roughness (RMS) are discussed. A new model is suggested to explain how the BCl3 flow rate affects the etched-surface roughness. An optimized slow and smooth etch was obtained, with an etch rate of 0.36 Å/s and an RMS roughness of 0.9 nm.

## 4-port digital isolator based on on-chip transformer
Available online.

The design and fabrication of a 4-port digital isolator based on an on-chip transformer for galvanic isolation are presented. An ON–OFF keying modulation scheme is used to transmit the digital signal. The proposed digital isolator is fabricated in a 0.18 μm CMOS process. A test chip achieves a 1 MHz signal bandwidth, a 40 ns propagation delay, a 35.5 mW input power and a 50 mA output drive current. The proposed digital isolator is a pin-compatible, small-volume, low-power replacement for the common 4-port optocoupler.

## A sample and hold circuit for pipelined ADC
Available online.

A high performance sample-and-hold (S/H) circuit for a pipelined analog-to-digital converter (ADC) is presented. A fully-differential capacitor flip-around architecture is used. A gain-boosted folded-cascode operational transconductance amplifier (OTA) with a DC gain of 90 dB and a GBW of 738 MHz was designed, and a low-supply-voltage bootstrapped switch was used to improve the linearity of the S/H circuit. With these techniques, the S/H circuit reaches 94 dB SFDR for a 48.9 MHz input frequency at a 100 MS/s sampling rate. Measurement results of a 14-bit 100-MS/s pipelined ADC with the designed S/H circuit are presented.

## Impact of design and process variation on the fabrication of SiC diodes
Available online.

In this paper we study the influence of design and process variations on the electrical performance of SiC Schottky diodes. On the design side, two variations are used in the active cell of the diode (segment design and stripe design), and two further variations are employed for edge termination, namely FLR and JTE. On the process side, some diodes received an N2O annealing step. The segment design resulted in a lower VF, and the FLR design proved the better choice for blocking voltage under reverse bias. N2O annealing has a detrimental effect on the blocking performance of diodes with the JTE termination, degrading their blocking capability significantly.

## Optimization of erase time degradation in 65 nm NOR flash memory chips
Available online.

Reliability issues of flash memory are becoming increasingly significant as technology nodes shrink. Among them, erase time degradation draws the attention of academic and industrial researchers. In this paper, the causes of erase time degradation are exhaustively analyzed and improvements are proposed, including a low-stress program/erase scheme with a staircase pulse and a disturb-immune array bias condition. The optimized circuit structure is verified in a 128 Mb SPI NOR flash memory chip fabricated on a SMIC 65 nm ETOX process platform. Test results indicate that the sector erase time degrades from 10.67 to 104.9 ms after 10^5 program/erase cycles, an improvement of approximately 100 ms over conventional schemes.

## Memory characteristics of microcavity dielectric barrier discharge
Available online.

This paper mainly studies the nonlinear resistance characteristics of microcavity dielectric barrier discharge. A simulation model of microcavity dielectric barrier discharge is built to study the relationship between voltage and current during discharge, from which its I–V characteristic curve is obtained. The I–V characteristics of the memristor are analyzed and compared with those of the dielectric barrier discharge; the comparison shows that the I–V characteristics of the microcavity dielectric barrier discharge are similar to those of a memristor. The memory characteristics of the microcavity dielectric barrier discharge are analyzed further.

## Performance improvement of light-emitting diodes with double superlattices confinement layer
Available online.

In this study, the effect of double superlattices on GaN-based blue light-emitting diodes (LEDs) is analyzed numerically. One superlattice, composed of InGaN/GaN, is placed before the multiple quantum wells (MQWs); the other, AlInGaN/AlGaN, is inserted between the last quantum barrier (QB) and p-GaN. The crucial characteristics of the double-superlattice LED structure, including the energy band diagrams, carrier concentrations in the active region, light output power and internal quantum efficiency, are analyzed in detail. The simulation results suggest that, compared with a conventional AlGaN electron-blocking layer (EBL) LED, the LED with double superlattices performs better owing to enhanced electron confinement and increased hole injection. The double superlattices make it easier for carriers, especially holes, to tunnel into the MQWs, while effectively suppressing electron overflow out of the MQWs. The output power is enhanced dramatically and the efficiency droop is substantially mitigated when the double superlattices are used.

## Berger code based concurrent online self-testing of embedded processors
Available online.

In this paper, we propose an approach to detect temporary faults induced by single event upsets (SEUs). Berger code based self-checking checkers provide online detection of faults in digital circuits as well as in memory arrays. A concurrent Berger code based online self-testable methodology is proposed and integrated into a 32-bit DLX reduced instruction set computer (RISC) processor on a single silicon chip, and is implemented and verified for various arithmetic and logical operations of the DLX processor. The FPGA implementation of the proposed design shows that a modest increase in hardware utilization enables online self-testing for temporary faults.

## High-performance pulse-width modulation AC/DC controller using novel under voltage lockout circuit according to Energy Star VI standard
Available online.

This paper proposes a high-performance pulse-width modulation (PWM) AC/DC controller that can drive a high-voltage (HV) 650-V power metal-oxide-semiconductor field-effect transistor (MOSFET) in typical adapter applications for portable electronic devices. To reduce standby power consumption and improve response speed during start-up, an improved under voltage lockout (UVLO) circuit without a voltage reference source or comparator is adopted. The controller is fabricated in a 40-V 0.8-μm one-poly two-metal (1P2M) CMOS process and occupies only 1410 × 730 μm2. A 12 V/2 A flyback topology for quick-charge applications, currently one of the most advanced power adapters in use, serves as the test circuit. Measured turn-on and turn-off threshold voltages are 19.318 and 8.01 V, respectively. The high hysteresis voltage of 11.308 V allows the power-charging capacitor to be as small as 1 μF, reducing production cost. In addition, the start-up current of 2.3 μA is extremely small, which reduces the system's standby power consumption. The final test results of the overall system meet the Energy Star VI standard, and the controller is already in mass production for industrial applications.

## Reliability testing of a 3D encapsulated VHF MEMS resonator
Available online.

The frequency stability of a three-dimensional (3D) vacuum-encapsulated very high frequency (VHF) disk resonator is systematically investigated. To eliminate the parasitic effect of the printed circuit board (PCB) capacitance, a negating capacitive compensation method was developed. Long-term stability testing at 25 °C for 240 h indicates that the resonant frequency variation remained within ±1 ppm and that the noise floor derived from the Allan deviation was 26 ppb, competitive with conventional quartz resonators. A resonant frequency fluctuation of 1.5 ppm was observed over 200 temperature cycles between −40 and 85 °C.
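For context on the Berger check symbol used in the self-testing scheme a few entries above: in its B0 form the check symbol is simply the count of 0 bits in the data word, and the code detects all unidirectional errors (errors that only flip 0→1, or only 1→0). A minimal sketch, with an illustrative 8-bit word and helper names that are not from the paper:

```python
def berger_encode(data: int, k: int) -> tuple:
    """Return (data, check): check is the number of 0 bits in the
    k-bit data word (the B0 Berger check symbol)."""
    zeros = k - bin(data & ((1 << k) - 1)).count("1")
    return data, zeros

def berger_verify(data: int, check: int, k: int) -> bool:
    """Recompute the zero count and compare against the stored check."""
    return (k - bin(data & ((1 << k) - 1)).count("1")) == check

data, check = berger_encode(0b10110010, 8)   # four 0-bits -> check = 4
assert berger_verify(data, check, 8)

# A unidirectional 0->1 error in the data word lowers the zero count
# without touching the stored check, so it is always detected:
corrupted = data | 0b00000100                # flip one 0 bit to 1
assert not berger_verify(corrupted, check, 8)
```

A checker built this way is "self-checking" in hardware terms because any unidirectional fault in either the data or the check field produces a mismatch.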
## The developing condition analysis of semiconductor laser frequency stabilization technology
Available online.

The frequency stability of free-running semiconductor lasers is influenced by several factors, such as the driving current and the external operating environment, and laser frequency stabilization has become an international research hotspot in recent years. This paper reviews active frequency stabilization technologies for laser diodes and elaborates their principles. Based on differences in the frequency discrimination curves, these technologies are classified into three major types: harmonic frequency stabilization, Pound–Drever–Hall (PDH) stabilization and curve-subtraction frequency stabilization. The merits and demerits of each technology are compared in terms of frequency stability and structural complexity. Finally, the prospects of semiconductor laser frequency stabilization are discussed in detail: combining several of these methods is a future trend, especially combinations involving F–P cavity frequency stabilization, and PID electronic control is generally added to the above methods to optimize the servo system.

## Impact of damping on high speed 850 nm VCSEL performance
Available online.

High speed VCSELs are important optical devices in short-reach optical communication links and interconnects because of their low cost and high modulation speeds. In this paper, the impact of damping on their static and dynamic characteristics is analyzed and demonstrated. Through shallow etching of the top DBR layer, VCSELs with different damping were designed and fabricated. As the surface etch depth increases from 0 to ~55 nm for a 9 μm oxide-aperture VCSEL, the damping-related K factor is reduced from 0.31 to 0.23 ns−1. When the etch depth of the 9 μm oxide-aperture VCSEL is decreased to ~25 nm, the output power increases from 4.03 to 4.70 mW and the small-signal modulation bandwidth increases from 15.46 to 16.37 GHz. This shows that there is a tradeoff between damping and differential gain in improving modulation speed.

## A high-efficiency charge pump in BCD process for implantable medical devices
Available online.

This paper presents a high-efficiency charge pump composed of cascaded cross-coupled voltage doublers, implemented in an isolated bipolar-CMOS-DMOS (BCD) technology for implantable medical devices. Taking advantage of the transistor structures in the isolated BCD process, the leakage currents caused by the parasitic PNP transistors in the cross-coupled PMOS serial switches are eliminated simply by connecting the inner substrate terminal to the isolation terminal of each PMOS transistor. The simple circuit structure leads to small parasitic capacitance in the voltage doubler, which in turn ensures high efficiency of the overall charge pump. The proposed charge pump with five cascaded voltage doublers is fabricated in a 0.35-μm isolated BCD process. Measurements with a 2-V power supply, a 1-MHz driving clock and a 40-μA current load show an efficiency of 72.6%, and the output voltage can be pumped to about 11.5 V at zero load current. The chip area of the charge pump is 1.6 × 0.35 mm2.

## FEM thermal analysis of high power GaN-on-diamond HEMTs
Available online.

A three-dimensional thermal analysis of GaN HEMTs on a diamond substrate is performed using the finite element method. The diamond substrate thickness, area and shape, the transition layer thickness and the thermal conductivity of the transition layer are treated appropriately in the numerical simulation. The temperature distribution and heat spreading paths are investigated under different conditions. The results indicate that the transition layer increases the channel temperature, and that the thickness, area and shape of the diamond substrate also affect it. The channel temperature falls with increasing diamond substrate thickness and area, but with a diminishing return that can be explained by saturation effects of the diamond substrate. Because the substrate shape also affects the temperature performance, to achieve favorable heat dissipation for a given diamond substrate area the shape should contain as many isothermal curves as possible when the isothermal gradient is constant. This study of the thermal properties of GaN on diamond is useful for predicting the heating of high power GaN HEMT devices and for the optimal design of efficient heat spreaders.

## A 0.9 V PSRR improved voltage reference using a wide-band cascaded current mode differentiator
Available online.

We present a voltage reference using a wide-band cascaded current mode differentiator for improved PSRR performance. Compared with conventional references, this reference is mainly characterized by a two-stage cascaded current mode signal differentiator, in which a zero-OTA-Gm technique is proposed to achieve the wide-band differential characteristic. With this technique, the PSRR beyond the frequency corresponding to the pole can be significantly improved, with a minimum supply voltage of only about VGS_PMOS + (VGS_NMOS − VTH). Fabricated in a 0.18 μm CMOS process with a 0.9 V supply, the reference achieves a PSRR of −54 dB at 20 MHz, with a power dissipation of 19 μW.
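A first-order idealized model of an N-stage cross-coupled charge pump (no-load output (N+1)·Vdd, minus a droop proportional to the charge drawn per clock cycle) reproduces the ballpark of the numbers in the charge-pump abstract above. This is the textbook Dickson-style approximation, not the paper's model, and the per-stage capacitance is an assumed value:

```python
def charge_pump_vout(vdd, n_stages, i_load, f_clk, c_stage):
    """Ideal first-order N-stage charge pump output voltage:
    (N+1)*Vdd at no load, reduced by N*Iload/(f*C) under load."""
    return (n_stages + 1) * vdd - n_stages * i_load / (f_clk * c_stage)

# Conditions from the abstract: 2 V supply, 5 stages, 1 MHz clock,
# 40 uA load. The 1 nF per-stage capacitance is an assumption.
v_no_load = charge_pump_vout(2.0, 5, 0.0, 1e6, 1e-9)     # ideal 12.0 V
v_loaded  = charge_pump_vout(2.0, 5, 40e-6, 1e6, 1e-9)   # 11.8 V
```

The ideal 12 V no-load output is consistent with the ~11.5 V the chip reports once real switch drops and parasitics are included.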
• ## Research progress and challenges of two dimensional MoS2 field effect transistors , Available online Abstract Full Text PDF This review paper gives an outline of the recent research progress and challenges of 2D TMDs material MoS2 based device, that leads to an interesting path towards approaching the electronic applications due to its sizeable band gap. This review presents the improvement of MoS2 material as an alternate to a silicon channel in a transistor with its excellent energy band gap, thermal conductivity, and exclusive physical properties that are expected to draw attention to focusing on semiconducting devices for most futuristic applications. We discuss the band structure of MoS2 for a different number of layers with its structure, and various synthesis techniques of the MoS2 layer are also reviewed. The MoS2 based field effect transistor has attracted a great deal of attention due to its excellent properties such as mobility, on/off current ratio, and maximum on-current of the devices. The transition of mobility as a function of temperature and thickness dependence are also discussed. However, the mobility of MoS2 material is large in bulk form and lower in monolayer form. The use of a high-k gate dielectric in MoS2 FET is used to enhance the mobility of the device. Different metal contact engineering and different doping techniques were deployed to achieve low contact resistance. This review paper focuses on various aspects of layered TMDs material MoS2 based field effect transistors. • ## Ultralow specific ON-resistance high-k LDMOS with vertical field plate , Available online Abstract Full Text PDF An ultralow specific on-resistance high-k LDMOS with vertical field plate (VFP HK LDMOS) is proposed in this paper. The high-k dielectric trench and highly doped interface N+ layer are made in bulk silicon to reduce the surface field of the drift region in the VFP HK LDMOS. 
The gate vertical field plate (VFP) pinning in the high-k dielectric trench can modulate the bulk electric field. The high-k dielectric not only provides polarized charges to assist depletion of the drift region, so that the drift region and high-k trench maintain charge balance adaptively, but also can fully assist in depleting the drift region to increase the drift doping concentration and reshape the electric field to avoid premature breakdown. Compared with the conventional structure, the VFP HK LDMOS has the breakdown voltage of 629.1 V at the drift length of 40 μm and the specific on-resistance of 38.4 mΩ·cm2 at the gate potential of 15 V. Then the power figure of merit is 10.31 MW/cm2. • ## High order DBR GaSb based single longitude mode diode lasers at 2 μm wavelength , Available online Abstract Full Text PDF The GaSb-based distributed Bragg reflection (DBR) diode laser with 23rd-order gratings have been fabricated by conventional UV lithography and inductively coupled plasma (ICP) etching. The ICP etching conditions were optimized and the relationship among etching depth, duty ratio and side-mode suppression ratio (SMSR) was studied. The device with a ridge width of 100 μm, gratings period of 13 μm and etching depth of 1.55 μm as well as the duty ratio of 85% was fabricated, its maximum SMSR reached 22.52 dB with uncoated cavity facets under single longitudinal operation mode at room temperature. • ## Modeling of tunneling current density of GeC based double barrier multiple quantum well resonant tunneling diode , Available online Abstract Full Text PDF In this paper, the double barrier quantum well (DBQW) resonant tunneling diode (RTD) structure made of SiGeSn/GeC/SiGeSn alloys grown on Ge substrate is analyzed. The tensile strained Ge1−zCz on Si1−xyGexSny heterostructure provides a direct band gap type I configuration. The transmission coefficient and tunneling current density have been calculated considering single and multiple quantum wells. 
A comparative study of tunnelling current of the proposed structure is done with the existing RTD structure based on GeSn/SiGeSn DBH. A higher value of the current density for the proposed structure has been obtained. • ## Impact of varying carbon concentration in SiC S/D asymmetric dual-k spacer for high performance and reliable FinFET , Available online Abstract Full Text PDF In this paper, we propose a reliable asymmetric dual-k spacer with SiC source/drain (S/D) pocket as a stressor for a Si channel. This enhances the device performance in terms of electron mobility (eMobility), current driving capabilities, transconductance (Gm) and subthreshold slope (SS). The improved performance is an amalgamation of longitudinal tensile stress along the channel and reduced series resistance. We analysed the variation in drive current for different values of carbon (C) mole fraction y in Si1−yCy. It is found that the mole fraction also helps to improve device lifetime, performance enhancement also pointed by transconductance variation with the gate length. All the simulations are performed in the 3-D Sentaurus TCAD tool. The proposed device structure achieved ION = 2.17 mA/μm for Si0.3C0.7 and found that Si0.5C0.5 is more suitable for the perspective of a process variation effect for 14 nm as the gate length. We introduce reliability issues and their solutions for Si1−yCy FinFET for the first time. • ## Two dimensional analytical model for a negative capacitance double gate tunnel field effect transistor with ferroelectric gate dielectric , Available online Abstract Full Text PDF Analytical models are presented for a negative capacitance double-gate tunnel field-effect transistor (NC DG TFET) with a ferroelectric gate dielectric in this paper. The model accurately calculates the channel potential profile by solving the Poisson equation with the Landau–Khalatnikov (LK) equation. Moreover, the effects of the channel mobile charges on the potential are also taken into account. 
We also analyze the dependences of the channel potential and the on-state current on the device parameters by changing the thickness of ferroelectric layer, ferroelectric material and also verify the simulation results accord with commercial TCAD. The results show that the device can obtain better characteristics when the thickness of the ferroelectric layer is larger as it can reduce the shortest tunneling length. • ## Synthesis and characterization of poly (2,5-diyl pyrrole-2-pyrrolyl methine) semiconductor copolymer , Available online Abstract Full Text PDF In the current research, the proposed technique to synthesise poly {(2,5-diyl pyrrole) (2-pyrrolyl methine)} (PPPM) copolymer by condensation of pyrrole and pyrrole-2- carboxaldehyde monomers catalyzed by Maghnite-H+ is introduced. The protons are exchanged with Maghnite-H+, which is available in the form of a montmorillonite silicate clay sheet. The effect of several parameters such as time and temperature of copolymerization, [pyrrole]/[pyrrole-2-carboxaldehyde] molar ratio, amount of Maghnite-H+, and solvent on the produced poly (2,5-diyl pyrrole-2- pyrrolyl methine) semiconductor copolymer material (yield%) was investigated. The synthesized PPPM copolymer was characterized using nuclear magnetic resonance, Fourier transform infrared, and ultraviolet-visible spectroscopy. The results show that the synthesized copolymer using the copolymerization technique is a real organic copolymer consisting of two monomers units (i.e, pyrrole and pyrrole- 2-carboxaldehyde). Also, the synthesized copolymer is more soluble than polypyrrole in most of the commonly used organic solvents. Hence, copolymerization of pyrrole with pyrrole-2- carboxaldehyde will overcome the insolubility of polypyrrole. In addition, the resultant copolymer exhibits good film formability. 
The produced copolymer has several potential applications in the fields of rechargeable batteries, sensors, capacitors, light-emitting diodes, optical displays, and solar cells.
http://mathhelpforum.com/algebra/212260-factorising-print.html
# Factorising • January 30th 2013, 04:09 AM cxz7410123 Factorising does anybody know how this came about? thank you • January 30th 2013, 05:46 AM Shakarri Re: Factorising They use the fact that 5! = 5·4! and 4! = 4·3! to change 8!/(4!·4!) into 8!/(4!·(4·3!)) and 8!/(5!·3!) into 8!/((5·4!)·3!) to get 4!·3! in the denominator of both quantities, then factorise out 8!/(4!·3!) • January 30th 2013, 08:21 AM pspss Re: Factorising for multiplication, ( · ) is used instead of (×): 8!/(4!·4!) + 8!/(5!·3!) = 8!/(4!·4!) + 8!/(4!·5·3!) = 8!/4! · [1/4! + 1/(5·3!)] = 8!/4! · [1/(3!·4) + 1/(5·3!)] = 8!/(4!·3!) · [1/4 + 1/5] • January 30th 2013, 03:09 PM Prove It Re: Factorising Quote: Originally Posted by pspss for multiplication, ( · ) is used instead of (×): 8!/(4!·4!) + 8!/(5!·3!) = 8!/(4!·4!) + 8!/(4!·5·3!) = 8!/4! · [1/4! + 1/(5·3!)] = 8!/4! · [1/(3!·4) + 1/(5·3!)] = 8!/(4!·3!) · [1/4 + 1/5] Actually, a (.) is used as a decimal point; it's a centred dot, $\cdot$, that is used for multiplication. Though a $\times$ is also acceptable as long as it doesn't get mixed up with any $x$ variables. • January 30th 2013, 05:53 PM topsquark Re: Factorising Quote: Originally Posted by cxz7410123 does anybody know how this came about? thank you The solution has already been given, but I thought I'd rewrite it a bit: $\frac{8!}{4! \cdot 4!} + \frac{8!}{5! \cdot 3!} = \frac{8!}{4! \cdot 3!} \left ( \frac{1}{1 \cdot 4} + \frac{1}{5 \cdot 1} \right )$ We have a forum that deals with how to write expressions. It's the LaTeX Help forum. It's pretty easy to learn the basics. :) -Dan
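A worked check (my addition, not part of the thread) carrying the factorisation through to a number:

```latex
\frac{8!}{4!\,4!}+\frac{8!}{5!\,3!}
  = \frac{8!}{4!\,3!}\left(\frac{1}{4}+\frac{1}{5}\right)
  = \frac{40320}{144}\cdot\frac{9}{20}
  = 280\cdot\frac{9}{20}
  = 126,
```

which agrees with $\binom{8}{4}+\binom{8}{5}=70+56=126$, as expected.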
https://canadam.math.ca/2011/program/abs/gi2.html
CanaDAM 2011 University of Victoria, May 31 - June 3, 2011 www.cms.math.ca//2011 Geometric Representations of Graphs I Org: Stefan Felsner (Technische Universitaet Berlin) [PDF] MATTHEW FRANCIS, Department of Computer Science, University of Toronto On Segment graphs  [PDF] Segment graphs are the intersection graphs of line segments in the plane. It was conjectured by Kratochvil and Kubena that the complement of every planar graph is a segment graph. We show that the counter-examples, if any, to this conjecture have to be somewhat complex planar graphs, as the complement of every partial 2-tree is a segment graph. We also look at some open problems regarding segment graphs. JAN KRATOCHVIL, Charles University, Prague Intersection graphs of homothetic polygons  [PDF] Take a convex polygon and consider its homothetic copies in the plane, i.e. scaling and translation are allowed, no rotations. Several algorithmic, complexity, and structural results on intersection graphs of this type have recently been obtained. For instance the simplest case of a triangle results in the class of max-tolerance graphs, which has very recently been shown to contain all planar graphs. We will survey recent progress and discuss pertaining open problems. TOBIAS MÜLLER, Centrum Wiskunde en Informatica The smallest grid needed to represent a geometric intersection graph  [PDF] Spinrad [2003] asked for the smallest $k = k(n)$ such that every disk graph on $n$ vertices can be represented by disks whose radii and coordinates of the centers are at most $k$ in absolute value. We will show that $k = 2^{2^{\Theta(n)}}$. On the other hand any intersection graph of homothets/translates of a fixed polygon $P$ can be represented on a $2^{\Theta(n)} \times 2^{\Theta(n)}$-grid and this is sharp. 
TORSTEN UECKERDT, Technische Universitaet Berlin Edge-Intersection Graphs of Grid Paths - the Bend Number  [PDF] In 2007, Golumbic, Lipshteyn and Stern introduced \textit{edge-intersection graphs of grid paths}, EGP graphs for short. These can be viewed as intersection graphs of systems of intervals in the plane grid. This talk introduces EGP graphs, an associated parameter called the \textit{bend number}, its interrelations with the well-known interval number and track number, and a lot of open questions. SUE WHITESIDES, Computer Science Department, University of Victoria BC On Upward Topological Book Embeddings of Upward Planar Digraphs  [PDF] We study topological book embeddings (spine crossings allowed), point-set embeddings, and simultaneous embeddings of upward planar digraphs. Our approach is based on a linear time algorithm for computing an upward planar drawing of an upward planar digraph, with all vertices collinear. joint work with G. Giordano, G. Liotta, T. Mchedlidze, and A. Symvonis. Handling of online submissions has been provided by the CMS.
https://www.europeanpharmaceuticalreview.com/news/153045/global-mass-spectrometry-market-to-value-7-3-billion-by-2028/
# Global mass spectrometry market to value $7.3 billion by 2028 With roughly a fifth of the mass spectrometry market made up by pharma and biotech, growing R&D investment and technological breakthroughs in mass spectrometers will contribute to market growth. New market research suggests the global mass spectrometry market will grow from $4.48 billion in 2020 to a value of $7.3 billion by 2028, reflecting a compound annual growth rate (CAGR) of 6.24 percent. The pharma and biotech industries account for almost 20 percent of the market, the largest share, with other contributing markets including food and beverage testing and environmental testing. Mass spectrometry is an analytical method that measures the mass-to-charge ratio of ions and is used to determine unknown components in a sample. The quantitative and qualitative analytical tool can assess complex mixtures in all phases of drug development, for instance, identification of the lead compound and its conformational details. Aside from its uses in research for functional genomics, metabolomics and proteomics, it is also used in many clinical areas (such as therapeutic drug monitoring, drugs of abuse and clinical toxicology), as well as in combination with other analytical techniques, eg, gas chromatography (GC) or liquid chromatography (LC). Clinical laboratories manage a large number of samples through total automation or analyser automation, which is offered by clinical mass spectrometers such as matrix-assisted laser desorption/ionization (MALDI-TOF) instruments. These automated platforms assist in the effective processing of increasingly large workloads. Thus, clinical laboratories are among the largest end-users in this market. ## Drivers and barriers to the mass spectrometry market According to the report, a driver of growth will be increasing demand for automation in diagnostic techniques, as well as the need for a cost-effective platform for sample analysis.
This has motivated manufacturers to focus on product development and innovation. In addition, increasing funding in the pharmaceutical and biotechnology industry is expected to drive the growth of the mass spectrometry market. The R&D expenditure of pharmaceutical companies has increased significantly over the last two decades, with the 2018 EU Industrial R&D Investment Scoreboard estimating that the pharmaceutical and biotechnology sector accounted for 18.9 percent of total global R&D expenditure. Mass spectrometry plays a key role in the pharmaceutical industry, from the early stages of drug discovery to late-stage development and clinical trials. Thus, increasing funding in the pharmaceutical and biotechnology industry is expected to drive the growth of the market. Moreover, technological breakthroughs in mass spectrometers are expected to drive market trends. Advanced technologies such as ion mobility spectrometry and capillary electrophoresis are being used for the separation of complex biological mixtures, such as derived peptide products. Additionally, miniaturisation is expected to propel the growth of this market. However, the capital investment associated with the installation and maintenance of these devices is expected to restrain the growth of the global market. Mass spectrometers are largely unaffordable for small diagnostic clinics and laboratories, especially in emerging economies. Mass spectrometry is also labour-intensive and requires a skilled workforce to operate the devices; thus, the shortage of skilled operators is expected to hamper market growth. ## Mass spectrometry by platform The market is separated into single mass spectrometry, hybrid mass spectrometry and others, with the hybrid mass spectrometry segment expected to witness the fastest growth between 2021 and 2028 (the forecast period). This is being driven by its advantages, such as rapid, high-resolution testing with more accurate and precise results, which is increasing its adoption.
## Market by geography By geography, the global mass spectrometry market is dominated by North America, because of factors such as the growing funding for research and government initiatives in the US, widespread usage of mass spectrometry in the metabolomics and petroleum sector and CFI funding towards mass spectrometry projects in Canada. Additionally, regulatory agencies in the US, such as the Food and Drug Administration (FDA), are encouraging the use of analytical techniques to ensure that the pharmaceutical products released in the market adhere to quality requirements. Key players in the global mass spectrometry market are Shimadzu Corporation, Agilent Technologies, PerkinElmer, Dani Instruments, Thermo Fisher Scientific, Bruker, Leco Corporation, Waters Corporation, Sciex and Hiden Analytical.
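The article's headline figures are easy to sanity-check. A minimal sketch (mine, not from the report), assuming the 2020 base of $4.48 billion, the 2028 horizon, and the stated 6.24 percent CAGR:

```python
# Compound the 2020 market size forward at the stated CAGR and compare
# with the 2028 forecast: projected = base * (1 + cagr) ** years.
base_2020 = 4.48   # USD billion, article figure
cagr = 0.0624      # 6.24 percent, article figure
years = 2028 - 2020

projected = base_2020 * (1 + cagr) ** years
print(round(projected, 2))  # 7.27, consistent with the ~$7.3 billion forecast
```

The stated CAGR and forecast are mutually consistent once the forecast is rounded to one decimal place.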
https://community.wolfram.com/groups/-/m/t/1428840?sortMsg=Recent
# Would anyone like to test my package for crystallography? Posted 2 years ago 3025 Views | 4 Replies | 3 Total Likes In the process of publishing my Mathematica package in a journal (Journal of Applied Crystallography), I need a couple of short reports from independent users – see the extract below. If you are submitting an article to the Computer Programs category, you should provide brief reports from two independent users (i.e. users who are not colleagues or close collaborators of the authors) that confirm the usefulness of the program and the adequacy of the documentation. These reports will be used only during the review process and will not be published. It is a package for doing basic crystallography-related operations in Mathematica. If anyone would like to try it out, play a little with the documentation to see that it works, and send a short report back to me, it would be much appreciated. You can get the package from GitHub. Edit: typo. I have found that the auto-complete code also works with version 10.0, as you say, but if I have Get["MaXrdCoreAutoComplete"] in the MaXrd/Kernel/init.m file and the package is loaded on startup with << MaXrd in the Mathematica/Kernel/init.m file, there will either be error messages or other functions will break. For instance, calling $MaXrdChangelog will give the error: LinkOpen::linke: Specified file is not a MathLink executable. Regarding the glitches in the documentation: when building the documentation with Mathematica 11.3, it creates boxes such as TemplateBox[{6}, "Spacer1"] which seem to be new to that version. I have found that if the package/documentation is built with an earlier Mathematica version in Eclipse (through the Wolfram Engine Installations in Preferences), the result will be fine. Finally, it seems the lowest compatible version is 10.3. I had made use of functions such as StringContainsQ and UpTo which were “too new”. Thank you again for the useful feedback!
Answer Posted 2 years ago This is what I see in 10.0 vs 11.3. I'm not saying you should support 10.0 (probably not worth it), just that it may be useful to indicate the minimum requirements. Answer Posted 2 years ago Thank you very much for the comment! Could you elaborate on which functions did not work in version 10.0 (alternatively, how you found out), and what was the glitching in the documentation? I will see to the other things you mentioned immediately. Answer Posted 2 years ago Just a small comment: it is marked as compatible with Mathematica 10.0+ in the PacletInfo file; however, it does not actually work with that version, and there are glitches in the display of the documentation. It would be nice to indicate a true compatibility version. A tip for your Kernel/init.m: it may be nicer not to hardcode the path as $UserBaseDirectory/Applications. What if e.g. a sysadmin wants to install it system-wide in $BaseDirectory? You can use the following directly: Get["MaXrdCoreDefinitions"] Get["MaXrdCoreAutoComplete"] The auto-completion code is made conditional on v11.3, but after a quick inspection, it seems it should work all the way back to v10.0. However, you may want to make it conditional on $Notebooks. It looks like a nice and polished package! Unfortunately, I don't have the domain expertise to write a report that would be acceptable for the journal.
https://mvtrinh.wordpress.com/2011/10/12/dividing-prime-4/
## Dividing Prime Pick any prime number greater than three. Find one less than the square of that prime number. What is the greatest positive integer that must be a divisor of the result? Source: mathcontest.olemiss.edu 10/10/2011 SOLUTION Let $p$ be a prime number greater than three. The table below lists the value of $p^2-1=\left (p+1\right )\left (p-1\right )$ for some prime numbers. The table suggests that 24 is the greatest positive divisor of $p^2-1$. The presence of the two consecutive even factors $\left (p+1\right )$ and $\left (p-1\right )$ gives us ideas on how to prove the conjecture. Since $24=2\times 2\times 2\times 3$, we want to show that at least three 2s and one 3 come from the product $\left (p+1\right )\left (p-1\right )$ Idea 1 $2\times 2$ is easy to see because both $\left (p+1\right )$ and $\left (p-1\right )$ are even and each contributes one 2. Idea 2 If $a$ and $b$ are two consecutive even numbers, then 4 divides either $a$ or $b$. Proof If 4 divides $a$, we are done. If 4 does not divide $a$, then there exist a quotient $q$ and a remainder $r$ such that $a=4q+r$,    $r=1,2,3$ If $r=1$ or $r=3$ $a=4q+1$ or $a=4q+3$ This is not possible because in either case the right-hand side is an odd number and $a$ is even. If $r=2$ $a=4q+2$ $b=a+2$ $=\left (4q+2\right )+2$ $=4q+4$ $=4\left (q+1\right )$ The last equation shows that 4 divides $b$. DONE. Applying Idea 2 $\left (p+1\right )$ and $\left (p-1\right )$ are two consecutive even numbers, thus 4 divides either one of them. So we pick up an additional 2 to have $2\times 2\times 2$. Idea 3 If $a,b,c$ are three consecutive positive integers, then 3 divides one of them. Proof If 3 divides $a$, we are done. If 3 does not divide $a$, then there exist a quotient $q$ and a remainder $r$ such that $a=3q+r$,    $r=1,2$ If $r=1$ $a=3q+1$ $c=a+2$ $=\left (3q+1\right )+2$ $=3q+3$ $=3\left (q+1\right )$ The last equation shows that 3 divides $c$. 
If $r=2$ $a=3q+2$ $b=a+1$ $=\left (3q+2\right )+1$ $=3q+3$ $=3\left (q+1\right )$ The last equation shows that 3 divides $b$. DONE. Applying Idea 3 $\left (p-1\right ),p,\left (p+1\right )$ are three consecutive positive integers, thus 3 divides either $\left (p-1\right )$ or $\left (p+1\right )$ and not $p$ because it is a prime number. So we pick up a factor 3 to finally have $2\times 2\times 2\times 3$. Therefore, 24 is the greatest positive integer that divides $p^2-1$.
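The argument is easy to corroborate numerically. A short sketch (my addition, not part of the original post) checking both that 24 divides $p^2-1$ for every prime $p>3$ below 200, and that no larger integer does so (the gcd of the values $p^2-1$ is exactly 24):

```python
from functools import reduce
from math import gcd

def is_prime(n):
    # Trial division; fine for the small range used here.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(5, 200) if is_prime(p)]
values = [p * p - 1 for p in primes]

# 24 divides every p^2 - 1 ...
assert all(v % 24 == 0 for v in values)

# ... and 24 is the greatest integer dividing all of them.
print(reduce(gcd, values))  # 24
```

Taking the gcd over many primes rules out any divisor larger than 24, matching the conclusion of the proof.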
https://economics.stackexchange.com/questions/15823/pareto-optimal-and-walrasian-equilibrium
# Pareto optimal and Walrasian equilibrium [closed] There are 100 units of good 1 and good 2 in an economy. Consumer 1 and consumer 2 have 50 units of each good. Consumer 1 only wants good 1, whereas consumer 2 only wants good 2. Note: Neither of these is a lexicographic preference. Question: Find the Walrasian equilibria Here is what I have reached so far: Can someone explain why the red lines are WE, and why not F and G? And P is Pareto optimal, I understand, but is it a Walrasian equilibrium as well? Is every Walrasian equilibrium Pareto optimal, and every Pareto optimal point a Walrasian equilibrium? ## closed as off-topic by Giskard, Oliv, Herr K., Bayesian, VicAche Mar 21 '17 at 8:50 This question appears to be off-topic. We have a pure exchange economy with two consumers, 1 and 2, and two goods, 1 and 2. The following utility functions can be used to represent their preferences: • $u_1(x_{11}, x_{12}) = x_{11}$ • $u_2(x_{21}, x_{22}) = x_{22}$ Equilibrium price vector $(p_1, p_2=1)$ and allocation $((x_{11}, x_{12}), (x_{21}, x_{22}))$ satisfy the following: Optimality Conditions (Allocation must solve the utility maximization problem of the two consumers, i.e. it must lie on their demand functions) • $(x_{11}, x_{12}) = \left(\frac{50p_1 + 50}{p_1}, 0\right)$ • $(x_{21}, x_{22}) = \left(0, 50p_1 + 50\right)$ Market Clearing Conditions • $x_{11} + x_{21} = 100$ • $x_{12} + x_{22} = 100$ Solving the above gives price vector $(p_1, p_2) = (1, 1)$ that supports the allocation $((x_{11}, x_{12}), (x_{21}, x_{22})) = ((100, 0), (0, 100))$ in equilibrium. This allocation is the only competitive equilibrium. It is also the only Pareto efficient allocation in this economy.
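The market-clearing computation above can be mirrored numerically. A small sketch (my own illustration, not from the answer), with good 2 as numéraire ($p_2 = 1$), finding the equilibrium price by bisection on the excess demand for good 1:

```python
def excess_demand_good1(p1, p2=1.0):
    # Consumer 1 spends all income (endowment: 50 units of each good) on good 1;
    # consumer 2 demands none of good 1.
    income = 50 * p1 + 50 * p2
    x11 = income / p1
    x21 = 0.0
    return x11 + x21 - 100  # total supply of good 1 is 100

# Bisection: excess demand is positive for small p1, negative for large p1.
lo, hi = 0.1, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if excess_demand_good1(mid) > 0:
        lo = mid
    else:
        hi = mid

p1 = (lo + hi) / 2
print(round(p1, 6))  # 1.0 -> allocation ((100, 0), (0, 100))
```

By Walras' law, clearing the market for good 1 clears the market for good 2 as well, so one equation in one unknown price suffices.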
https://www.alivelearn.net/?cat=13
Xu Cui

## An interesting gamble

The other day I was walking on a street, along which there are a lot of booths where people play games to gamble. I stopped in front of one booth. The host was warm and we started to talk. “How to play?”, I asked. “Well, simple.

## VMWare Player: guest OS not full screen?

VMWare Player is a great free tool if you want to run multiple operating systems on one computer. For example, you may have a Mac but need to run a few programs on Windows. Instead of purchasing a new Windows computer, you can simply use VMWare Player

## Evolution of man

What is the limit of this infinite exponential? Solution 1: $$x=\sqrt{2}^x$$ This leads to x = 2 or 4. Solution 2: $$x=^x$$ This leads to x not equal to 2 or 4. Solution 3: MatLab simulation of the series $\sqrt{2}, \sqrt{2}^{\sqrt{2}}, \sqrt{2}^{\sqrt{2}^{\sqrt{2}}}, \ldots$
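For Solution 3, the simulation is easy to reproduce. A Python sketch of what I take the MatLab simulation to be (my translation, not the author's code), iterating $x_{n+1}=\sqrt{2}^{\,x_n}$ from $x_0=\sqrt{2}$:

```python
import math

# Iterate the power tower sqrt(2), sqrt(2)^sqrt(2), sqrt(2)^(sqrt(2)^sqrt(2)), ...
x = math.sqrt(2)
for _ in range(200):
    x = math.sqrt(2) ** x

print(round(x, 9))  # 2.0 -- the tower converges to the fixed point 2, not 4
```

Both 2 and 4 solve $x=\sqrt{2}^x$, but only 2 is a stable fixed point of the iteration, which is why the simulation settles there.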
https://www.aimsciences.org/article/doi/10.3934/dcds.2013.33.305
American Institute of Mathematical Sciences January  2013, 33(1): 305-320. doi: 10.3934/dcds.2013.33.305 Existence, regularity and boundary behaviour of bounded variation solutions of a one-dimensional capillarity equation 1 Dipartimento di Matematica e Geoscienze, Università degli Studi di Trieste, Via A. Valerio 12/1, 34127 Trieste, Italy, Italy Received  August 2011 Published  September 2012 We discuss existence and regularity of bounded variation solutions of the Dirichlet problem for the one-dimensional capillarity-type equation \begin{equation*} \Big( u'/{ \sqrt{1+{u'}^2}}\Big)' = f(t,u) \quad \hbox{ in } {]-r,r[}, \qquad u(-r)=a, \, u(r) = b. \end{equation*} We prove interior regularity of solutions and we obtain a precise description of their boundary behaviour. This is achieved by a direct and elementary approach that exploits the properties of the zero set of the right-hand side $f$ of the equation. Citation: Franco Obersnel, Pierpaolo Omari. Existence, regularity and boundary behaviour of bounded variation solutions of a one-dimensional capillarity equation. Discrete & Continuous Dynamical Systems, 2013, 33 (1) : 305-320. doi: 10.3934/dcds.2013.33.305 References: [1] G. Anzellotti, The Euler equation for functionals with linear growth, Trans. Amer. Math. Soc., 290 (1985), 483-501. doi: 10.1090/S0002-9947-1985-0792808-4.  Google Scholar [2] D. Bonheure, F. Obersnel and P. Omari, Heteroclinic solutions of the prescribed curvature equation with a double-well potential, preprint, (2011). Google Scholar [3] D. Bonheure, P. Habets, F. Obersnel and P. Omari, Classical and non-classical solutions of a prescribed curvature equation, J. Differential Equations, 243 (2007), 208-237. doi: 10.1016/j.jde.2007.05.031.  Google Scholar [4] D. Bonheure, P. Habets, F. Obersnel and P. Omari, Classical and non-classical positive solutions of a prescribed curvature equation with singularities, Rend. Istit. Mat. Univ. Trieste, 39 (2007), 63-85.  Google Scholar [5] G. 
Buttazzo, M. Giaquinta and S. Hildebrandt, "One-dimensional Variational Problems. An Introduction,'' Clarendon Press, Oxford, 1998.  Google Scholar [6] K. C. Chang, The spectrum of the 1-Laplace operator, Commun. Contemp. Math., 11 (2009), 865-894. doi: 10.1142/S0219199709003570.  Google Scholar [7] M. Emmer, Esistenza, unicità e regolarità nelle superfici di equilibrio nei capillari, Ann. Univ. Ferrara Sez. VII (N.S.), 18 (1973), 79-94.  Google Scholar [8] C. Gerhardt, Existence and regularity of capillary surfaces, Boll. Un. Mat. Ital. (4), 10 (1974), 317-335.  Google Scholar [9] C. Gerhardt, Existence, regularity, and boundary behavior of generalized surfaces of prescribed mean curvature, Math. Z., 139 (1974), 173-198. doi: 10.1007/BF01418314.  Google Scholar [10] M. Giaquinta, Regolarità delle superfici $BV$ con curvatura media assegnata, Boll. Un. Mat. Ital. (4), 8 (1973), 567-578.  Google Scholar [11] E. Giusti, "Minimal Surfaces and Functions of Bounded Variations," Birkhäuser, Basel, 1984.  Google Scholar [12] P. Habets and P. Omari, Multiple positive solutions of a one-dimensional prescribed mean curvature problem, Commun. Contemp. Math., 9 (2007), 701-730. doi: 10.1142/S0219199707002617.  Google Scholar [13] A. Hammerstein, Nichtlineare Integralgleichungen nebst Anwendungen, Acta Math., 54 (1930), 117-176. doi: 10.1007/BF02547519.  Google Scholar [14] V. K. Le, Some existence results on non-trivial solutions of the prescribed mean curvature equation, Adv. Nonlinear Stud., 5 (2005), 133-161. Google Scholar [15] V. K. Le, Variational method based on finite dimensional approximation in a generalized prescribed mean curvature problem, J. Differential Equations, 246 (2009), 3559-3578. doi: 10.1016/j.jde.2008.11.015.  Google Scholar [16] U. Massari, Esistenza e regolarità delle ipersuperficie di curvatura media assegnata in $\RR^n$, Arch. Rational Mech. Anal., 55 (1974), 357-382. doi: 10.1007/BF00250439.  Google Scholar [17] J. Mawhin, J. R. Ward Jr. and M. 
Willem, Variational methods and semilinear elliptic equations, Arch. Rational Mech. Anal., 95 (1986), 269-277. doi: 10.1007/BF00251362.  Google Scholar [18] A. Mellet and J. Vovelle, Existence and regularity of extremal solutions for a mean-curvature equation, J. Differential Equations, 249 (2010), 37-75. doi: 10.1016/j.jde.2010.03.026.  Google Scholar [19] M. Miranda, Dirichlet problem with $L^1$ data for the non-homogeneous minimal surface equation, Indiana Univ. Math. J., 24 (): 227.  doi: 10.1512/iumj.1974.24.24020.  Google Scholar [20] F. Obersnel, Classical and non-classical sign changing solutions of a one-dimensional autonomous prescribed curvature equation, Adv. Nonlinear Stud., 7 (2007), 1-13.  Google Scholar [21] F. Obersnel and P. Omari, Existence and multiplicity results for the prescribed mean curvature equation via lower and upper solutions, Differential Integral Equations, 22 (2009), 853-880.  Google Scholar [22] F. Obersnel and P. Omari, Positive solutions of the Dirichlet problem for the prescribed mean curvature equation, J. Differential Equations, 249 (2010), 1674-1725. doi: 10.1016/j.jde.2010.07.001.  Google Scholar [23] F. Obersnel, P. Omari and S. Rivetti, Existence, regularity and stability properties of periodic solutions of a capillarity equation in the presence of lower and upper solutions, Nonlinear Anal. Real World Appl., 13 (2012), 2830-2852. doi: 10.1016/j.nonrwa.2012.04.012.  Google Scholar [24] L. Schwartz, Les théorèmes de Whitney sur les fonctions différentiables, Séminaire Bourbaki, Soc. Math. France, Paris, 1 (1995), 355-363.  Google Scholar [25] J. Serrin, The problem of Dirichlet for quasilinear elliptic differential equations with many independent variables, Philos. Trans. Roy. Soc. London Ser. A, 264 (1969), 413-496. doi: 10.1098/rsta.1969.0033.  Google Scholar
Rivetti, Existence, regularity and stability properties of periodic solutions of a capillarity equation in the presence of lower and upper solutions, Nonlinear Anal. Real World Appl., 13 (2012), 2830-2852. doi: 10.1016/j.nonrwa.2012.04.012.  Google Scholar [24] L. Schwartz, Les théorèmes de Whitney sur les fonctions différentiables, Séminaire Bourbaki, Soc. Math. France, Paris, 1 (1995), 355-363.  Google Scholar [25] J. Serrin, The problem of Dirichlet for quasilinear elliptic differential equations with many independent variables, Philos. Trans. Roy. Soc. London Ser. A, 264 (1969), 413-496. doi: 10.1098/rsta.1969.0033.  Google Scholar [1] Piotr Kowalski. The existence of a solution for Dirichlet boundary value problem for a Duffing type differential inclusion. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2569-2580. doi: 10.3934/dcdsb.2014.19.2569 [2] Alain Hertzog, Antoine Mondoloni. Existence of a weak solution for a quasilinear wave equation with boundary condition. Communications on Pure & Applied Analysis, 2002, 1 (2) : 191-219. doi: 10.3934/cpaa.2002.1.191 [3] Zhiming Guo, Zhi-Chun Yang, Xingfu Zou. Existence and uniqueness of positive solution to a non-local differential equation with homogeneous Dirichlet boundary condition---A non-monotone case. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1825-1838. doi: 10.3934/cpaa.2012.11.1825 [4] Franco Obersnel, Pierpaolo Omari. Multiple bounded variation solutions of a capillarity problem. Conference Publications, 2011, 2011 (Special) : 1129-1137. doi: 10.3934/proc.2011.2011.1129 [5] Yukihiko Nakata. Existence of a period two solution of a delay differential equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1103-1110. doi: 10.3934/dcdss.2020392 [6] Nguyen Thi Hoai. Asymptotic approximation to a solution of a singularly perturbed linear-quadratic optimal control problem with second-order linear ordinary differential equation of state variable. 
Numerical Algebra, Control & Optimization, 2021, 11 (4) : 495-512. doi: 10.3934/naco.2020040 [7] Út V. Lê. Regularity of the solution of a nonlinear wave equation. Communications on Pure & Applied Analysis, 2010, 9 (4) : 1099-1115. doi: 10.3934/cpaa.2010.9.1099 [8] Gökçe Dİlek Küçük, Gabil Yagub, Ercan Çelİk. On the existence and uniqueness of the solution of an optimal control problem for Schrödinger equation. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 503-512. doi: 10.3934/dcdss.2019033 [9] Yu-Feng Sun, Zheng Zeng, Jie Song. Quasilinear iterative method for the boundary value problem of nonlinear fractional differential equation. Numerical Algebra, Control & Optimization, 2020, 10 (2) : 157-164. doi: 10.3934/naco.2019045 [10] Shaoyong Lai, Yong Hong Wu, Xu Yang. The global solution of an initial boundary value problem for the damped Boussinesq equation. Communications on Pure & Applied Analysis, 2004, 3 (2) : 319-328. doi: 10.3934/cpaa.2004.3.319 [11] Kin Ming Hui, Jinwan Park. Asymptotic behaviour of singular solution of the fast diffusion equation in the punctured euclidean space. Discrete & Continuous Dynamical Systems, 2021, 41 (11) : 5473-5508. doi: 10.3934/dcds.2021085 [12] Kim-Ngan Le, William McLean, Martin Stynes. Existence, uniqueness and regularity of the solution of the time-fractional Fokker–Planck equation with general forcing. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2765-2787. doi: 10.3934/cpaa.2019124 [13] Daniel G. Alfaro Vigo, Amaury C. Álvarez, Grigori Chapiro, Galina C. García, Carlos G. Moreira. Solving the inverse problem for an ordinary differential equation using conjugation. Journal of Computational Dynamics, 2020, 7 (2) : 183-208. doi: 10.3934/jcd.2020008 [14] Xiang-Dong Fang. A positive solution for an asymptotically cubic quasilinear Schrödinger equation. Communications on Pure & Applied Analysis, 2019, 18 (1) : 51-64. doi: 10.3934/cpaa.2019004 [15] Shaoyong Lai, Yong Hong Wu. 
The asymptotic solution of the Cauchy problem for a generalized Boussinesq equation. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 401-408. doi: 10.3934/dcdsb.2003.3.401 [16] Irina Astashova, Josef Diblík, Evgeniya Korobko. Existence of a solution of discrete Emden-Fowler equation caused by continuous equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (12) : 4159-4178. doi: 10.3934/dcdss.2021133 [17] Yalçin Sarol, Frederi Viens. Time regularity of the evolution solution to fractional stochastic heat equation. Discrete & Continuous Dynamical Systems - B, 2006, 6 (4) : 895-910. doi: 10.3934/dcdsb.2006.6.895 [18] Iasson Karafyllis, Lars Grüne. Feedback stabilization methods for the numerical solution of ordinary differential equations. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 283-317. doi: 10.3934/dcdsb.2011.16.283 [19] Defei Zhang, Ping He. Functional solution about stochastic differential equation driven by $G$-Brownian motion. Discrete & Continuous Dynamical Systems - B, 2015, 20 (1) : 281-293. doi: 10.3934/dcdsb.2015.20.281 [20] Juan Dávila, Louis Dupaigne, Marcelo Montenegro. The extremal solution of a boundary reaction problem. Communications on Pure & Applied Analysis, 2008, 7 (4) : 795-817. doi: 10.3934/cpaa.2008.7.795 2020 Impact Factor: 1.392
2021-12-07 11:41:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580026626586914, "perplexity": 4350.042375988693}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363376.49/warc/CC-MAIN-20211207105847-20211207135847-00359.warc.gz"}
http://mathhelpforum.com/algebra/16725-one-more-equation.html
1. ## One more equation

I'm not sure how to begin attacking this one. Given the equation $4x^2 - 4xy + 1 - y^2 = 0$, use the Quadratic Formula to solve for (a) x in terms of y, (b) y in terms of x. Can I get a hint?

2. Hint: $4x^2 + x(-4y) + (1 - y^2) = 0$. So what are $a, b, c$?

3. Hmm... I get that I need to somehow isolate the variables on one side of the equation; I just can't seem to figure out how. It seems like every way I attack it, x and y stay mixed up together. Are you saying to take $(-4y)$ and $(1-y^2)$ and group them together somehow?

4. He's saying: identify a, b and c. Remember a quadratic equation has the form $(a)x^2 + (b)x + (c) = 0$.

5. That was my first guess as to what he meant, and I did plug those numbers (-1, -4, 1) into the Quadratic Formula and got the answer $-2 \pm \sqrt 5$. However, this is nowhere near the book's answers of $y = -2x \pm \sqrt{8x^2 + 1}$ or $x = \frac{y \pm \sqrt{2y^2 - 1}}{2}$, so I'm no closer to understanding. Am I supposed to plug the output of the Quadratic Formula in wherever y appears?

6. Where did -1, -4, 1 come from? Let's try this hint thing one more time; I will give you a blatant one.

For (a): $4x^2 - 4xy + 1 - y^2 = 0 \Rightarrow 4x^2 + (-4y)x + \left(1 - y^2\right) = 0$. Here, $a = 4 \mbox{ , } b = -4y \mbox{ , } c = 1 - y^2$.

For (b): $-y^2 + (-4x)y + 4x^2 + 1 = 0$. Here, $a = -1 \mbox{ , } b = -4x \mbox{ , } c = 4x^2 + 1$.

Can you continue now?

7. Thanks to all for the hints. My textbook hasn't given any example of this kind of problem, but just dumped the question in our laps regardless.

8. Did you get the correct solutions?
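The final hint can be checked numerically: treating y as a constant, the quadratic formula with $a = 4$, $b = -4y$, $c = 1 - y^2$ reproduces the book's answer for x, and case (b) likewise gives y. A small Python sketch (the function names are my own):

```python
import math

def x_in_terms_of_y(y):
    # a = 4, b = -4y, c = 1 - y^2  ->  x = (y ± sqrt(2y^2 - 1)) / 2
    a, b, c = 4.0, -4.0 * y, 1.0 - y**2
    d = math.sqrt(b**2 - 4*a*c)          # discriminant: 16y^2 - 16(1 - y^2)
    return ((-b + d) / (2*a), (-b - d) / (2*a))

def y_in_terms_of_x(x):
    # a = -1, b = -4x, c = 4x^2 + 1  ->  y = -2x ± sqrt(8x^2 + 1)
    a, b, c = -1.0, -4.0 * x, 4.0 * x**2 + 1.0
    d = math.sqrt(b**2 - 4*a*c)
    return ((-b + d) / (2*a), (-b - d) / (2*a))

def residual(x, y):
    # plug the roots back into the original equation; should be ~0
    return 4*x**2 - 4*x*y + 1 - y**2

for x in x_in_terms_of_y(1.0):
    assert abs(residual(x, 1.0)) < 1e-9
for y in y_in_terms_of_x(1.0):
    assert abs(residual(1.0, y)) < 1e-9
```

For instance, at y = 1 the roots are x = 1 and x = 0, and both satisfy the original equation.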
https://www.vedantu.com/chemistry/alum
# Alum

## What is Alum?

Alum is an inorganic compound composed of water molecules, aluminium, a second metal, and sulphate. An alum is a double salt in hydrated form, and it exists in several varieties: potash alum, soda alum, ammonium alum, and chrome alum. The general chemical formula for an alum is XAl(SO4)2·12H2O, where X stands for the second (monovalent) metal.

### Types

There are different types of alum, including:

| Alum | Common name | Chemical formula | Chemical name | Appearance | Molar mass |
| --- | --- | --- | --- | --- | --- |
| 1. Potash alum | Potassium alum; fitkari | KAl(SO4)2·12H2O | Potassium aluminium sulphate | White crystals; metallic-water smell | ~474.4 g/mol |
| 2. Soda alum | Sodium alum | NaAl(SO4)2·12H2O | Sodium aluminium sulphate | White crystals; metallic-water smell | ~458.3 g/mol |
| 3. Ammonium alum | Ammonium sulphate alum | NH4Al(SO4)2·12H2O | Ammonium aluminium sulphate | White crystals; metallic-water smell | ~453.3 g/mol |
| 4. Chrome alum | Chromium alum | KCr(SO4)2·12H2O | Chromium potassium sulphate | Purple crystals; metallic-water smell | ~499.4 g/mol |
| 5. Selenate alum | Selenium alum | Al2(SeO4)3 | Aluminium selenate | White crystals | ~482.9 g/mol |

1. Potash alum - Also known as potassium alum; its common name is fitkari. Its chemical formula is KAl(SO4)2·12H2O and its chemical name is potassium aluminium sulphate. This is the common alum, also known as white alum. Its molar mass is about 474.4 g/mol (258.2 g/mol for the anhydrous salt). It exists as white crystals and smells like metallic water.

2. Soda alum - Also known as sodium alum. Its chemical formula is NaAl(SO4)2·12H2O and its chemical name is sodium aluminium sulphate. Its molar mass is about 458.3 g/mol. It exists as white crystals and smells like metallic water.

3. Ammonium alum - Also known as ammonium sulphate alum. Its chemical formula is NH4Al(SO4)2·12H2O and its chemical name is ammonium aluminium sulphate. Its molar mass is about 453.3 g/mol. It exists as white crystals and smells like metallic water.

4. Chrome alum - Also known as chromium alum. Its chemical formula is KCr(SO4)2·12H2O; here chromium takes the place of aluminium, so its chemical name is chromium potassium sulphate. Its molar mass is about 499.4 g/mol. It exists as purple crystals and smells like metallic water.

5. Selenate alum - In this type of alum selenium takes the place of sulphur: the anion is selenate instead of sulphate. These alums are strong oxidizing agents. The molecular formula is Al2(SeO4)3 (aluminium selenate), with a molar mass of about 482.9 g/mol.

Alum is available both offline and online. Offline, it is generally sold at grocery shops and medical stores; online, it is available on various e-commerce websites. Alums are generally sold under common names like white fitkari, red fitkari, and white or red alum stones. Alums have anti-inflammatory properties and are used while gargling to reduce gum inflammation and pain.

### Properties of Alum

- Alums are highly soluble in water.
- They are sweet in taste.
- They generally crystallize in the regular octahedral form.
- Alum crystals liquify when heated.
- Alums generally exist as white, transparent crystals.
- Their boiling point is around 200 °C.
- Their melting point is 92.5 °C.
- Their density is 1.725 g/cm³.

### Alum Water Treatment

Alum water treatment is generally carried out to treat polluted water. Alum acts as a coagulant in the coagulation-flocculation process, a chemical water-treatment technique typically applied prior to sedimentation and filtration to enhance the ability of a treatment process to remove suspended particles.

Coagulation destabilizes the charges on the particles. Coagulants with charges opposite to those of the suspended solids are added to the water to neutralize the negative charges on dispersed non-settleable solids such as clay and organic substances:

$Al_{2}(SO_{4})_{3}\cdot 18H_{2}O + 6HCO_{3}^{-} \rightarrow 2Al(OH)_{3} + 6CO_{2} + 18H_{2}O + 3SO_{4}^{2-}$

Once the charge is neutralized, the small suspended particles are able to stick together. The slightly larger particles formed through this process are called microflocs, and they are still too small to be visible to the naked eye.

Flocculation is the follow-up step: a flocculant chemical (typically a polymer) is added to the water to stimulate bonding between the microflocs, creating larger aggregates (flocs) that can be separated easily.

### Uses of Alum

- In the pickling and baking process, and as an acidulating agent in cooking.
- In the tanning process of leather.
- In the coagulation-flocculation stage of water treatment.
- As a drying agent in the textile industry.
- As an antiseptic agent.

## FAQs on Alum

1. What is an alum?

Alum is an inorganic compound composed of water molecules, aluminium, a second metal, and sulphate. It is a double salt present in hydrated form, with the general chemical formula XAl(SO4)2·12H2O.

2. Write the properties of the alum.

- Alums are highly soluble in water.
- They are sweet in taste.
- They generally crystallize in the regular octahedral form.
- Alum crystals liquify when heated.
- Alums generally exist as white, transparent crystals.

3. What is Phitkari?
Phitkari is the common name of the compound KAl(SO4)2·12H2O, used for medicinal purposes and in water-treatment processes. It is a double salt of potassium sulphate and aluminium sulphate hydrated by water molecules, widely known as potash alum and chemically as potassium aluminium sulphate dodecahydrate. It was traditionally extracted from alunite minerals and is nowadays produced industrially.

4. Can we separate alum from a solution?

We can separate an alum from a solution using electrodialysis. Electrodialysis (ED) uses ion-exchange membranes and an applied electric field to separate ionic species and charged organic matter from the uncharged components of an aqueous solution — for example, to remove salt ions when purifying brackish water. A major advantage of running this process before filtration is that it does not rely on pressure and can do the task with less energy consumption.

5. What are the uses of Potash Alum?

Potash alum is used for various purposes in real life:

- Fire retardant: it adds flame resistance to cloth, wood, and other materials.
- Gourmet food: its acidic composition suits it for use in baking powder; bakeries in England have used this alum in their bread-making process.
- Dissolving steel: alum solutions are used to dissolve steel, for example to free steel tooling stuck in castings.
- Lake pigments: it acts as a base (mordant) for the pigmentation of most lake pigments.

6. What is the difference between Coagulation and Flocculation?

The two terms are not always sharply distinguished in industry, but we can compare their functions. Coagulation is the process of creating particles that aggregate by themselves, without mechanical aid — for example, by influencing the particles and changing their pH. Flocculation is the process of creating aggregates with the help of polymers that bind the particles together. Even so, the two processes are usually used hand in hand to purify brackish water or wastewater. Coagulation is also familiar from everyday life: a well-known example is sour milk — when milk goes sour its pH decreases, which destabilizes the casein proteins, which then coagulate.
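The molar masses quoted for the different alums above are easy to sanity-check with a few lines of arithmetic. The Python sketch below (my own, using standard atomic masses) shows where the two figures commonly quoted for potash alum come from: about 258.2 g/mol for the anhydrous salt KAl(SO4)2 and about 474.4 g/mol for the dodecahydrate KAl(SO4)2·12H2O:

```python
# Standard atomic masses in g/mol (rounded)
ATOMIC = {"K": 39.098, "Al": 26.982, "S": 32.06, "O": 15.999, "H": 1.008}

def molar_mass(counts):
    """counts maps an element symbol to its atom count in the formula unit."""
    return sum(ATOMIC[el] * n for el, n in counts.items())

# KAl(SO4)2 -- anhydrous potash alum
anhydrous = molar_mass({"K": 1, "Al": 1, "S": 2, "O": 8})

# KAl(SO4)2.12H2O -- the dodecahydrate adds 12 water molecules
hydrate = molar_mass({"K": 1, "Al": 1, "S": 2, "O": 8 + 12, "H": 24})

print(round(anhydrous, 1), round(hydrate, 1))   # prints: 258.2 474.4
```

The same helper applied to the other formulas reproduces the hydrate masses listed in the table.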
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=135&t=59553&p=224962
Delta H

$\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$

$\Delta G^{\circ}= -RT\ln K$

$\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$

Rafsan Rana 1A:

I forgot exactly what Dr. Lavelle said in lecture, but it was something about delta H — the change in enthalpy — not changing even if the temperature changes. Can someone explain what he said and what it means?

Ashley Wang 4G:

I think this is from using the van 't Hoff equation to calculate K at a different temperature when ∆Hº is known. We assume the difference in entropy ∆Sº between the reactants and products in a reaction is the same even when the reaction occurs at two different temperatures: even though the actual entropy values will be different at different T's, the change between them is assumed to be the same and thus not temperature-dependent. The same is assumed for ∆Hº. This is what lets us derive the integrated form ln(K2/K1) = -(∆Hº/R)(1/T2 - 1/T1) from the van 't Hoff equation. Hope this is helpful!
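The sign in the integrated van 't Hoff equation matters: for an exothermic reaction (∆Hº < 0), raising the temperature must lower K, and the opposite holds for an endothermic one. A quick numerical sketch (the reaction values below are made up for illustration):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k2_from_vant_hoff(k1, t1, t2, dH):
    """Integrated van 't Hoff equation, assuming dH (J/mol) is
    temperature-independent: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
    return k1 * math.exp(-(dH / R) * (1.0 / t2 - 1.0 / t1))

# Hypothetical exothermic reaction: K should shrink as T rises
assert k2_from_vant_hoff(k1=1000.0, t1=298.0, t2=350.0, dH=-50000.0) < 1000.0

# Hypothetical endothermic reaction: K should grow as T rises
assert k2_from_vant_hoff(k1=1000.0, t1=298.0, t2=350.0, dH=+50000.0) > 1000.0
```

With the sign flipped (as in the uncorrected forum post), both assertions would fail, which is a handy way to catch the error.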
https://genomics.sschmeier.com/ngs-tools/index.html
# 2. Tool installation

## 2.1. Install the conda package manager

We will use the package/tool managing system conda to install some programs that we will use during the course. It is not installed by default, thus we need to install it first to be able to use it.

    # download latest conda installer
    curl -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    # run the installer
    bash Miniconda3-latest-Linux-x86_64.sh
    # delete the installer after successful run
    rm Miniconda3-latest-Linux-x86_64.sh

### 2.1.1. Update .bashrc and .zshrc config-files

Before we are able to use conda we need to tell our shell where it can find the program. We add the path to the conda installation to our shell config files:

    echo 'export PATH="/home/manager/miniconda3/bin:$PATH"' >> ~/.bashrc
    echo 'export PATH="/home/manager/miniconda3/bin:$PATH"' >> ~/.zshrc

Attention: the above assumes that your username is "manager", which is the default on a Biolinux install. Replace "manager" with your actual username; find it out with whoami.

So what is actually happening here? We are appending a line to a file (either .bashrc or .zshrc). When we start a new command-line shell, one of these files gets executed first (which one depends on whether we are using the bash or zsh shell). The appended line permanently puts the directory ~/miniconda3/bin first on your PATH variable. The PATH variable contains the directories in which our computer looks for installed programs, one directory after the other, until the program you requested is found (or not — then it will complain). Through the addition of the above line we make sure that the program conda can be found any time we open a new shell.

Close the shell/terminal and re-open a new one. Now we should be able to use the conda command:

    conda update conda

### 2.1.2. Installing conda channels to make tools available

Different tools are packaged in what conda calls channels.
We need to add some channels to make the bioinformatics and genomics tools available for installation:

    # Install some conda channels
    # A channel is where conda looks for packages
    conda config --add channels defaults
    conda config --add channels bioconda
    conda config --add channels conda-forge

## 2.2. Create environments

We create a conda environment for some tools. This is useful for working reproducibly, as we can easily re-create the tool-set with the same version numbers later on.

    conda create -n ngs python=3
    # activate the environment
    conda activate ngs

So what is happening when you type conda activate ngs in a shell? The PATH variable (mentioned above) gets temporarily manipulated and set to:

    $ conda activate ngs
    # Let's look at the content of the PATH variable
    (ngs)$ echo $PATH
    /home/manager/miniconda3/envs/ngs/bin:/home/manager/miniconda3/bin:/usr/local/bin: ...

Now the shell will look first in your environment's bin directory, but afterwards in the general conda bin (/home/manager/miniconda3/bin). So basically everything you install generally with conda (without being in an environment) is also available to you, but it gets overshadowed if a similar program is in /home/manager/miniconda3/envs/ngs/bin and you are in the ngs environment.

## 2.3. Install software

To install software into the activated environment, one uses the command conda install:

    # install more tools into the environment
    conda install package

Note: to tell if you are in the correct conda environment, look at the command prompt. Do you see the name of the environment in round brackets at the very beginning of the prompt, e.g. (ngs)? If not, activate the ngs environment with conda activate ngs before installing the tools.

## 2.4. General conda commands

    # to search for packages
    conda search [package]
    # to update all packages
    conda update --all --yes
    # list all packages installed
    conda list [-n env]
    # list environments
    conda env list
    # create a new env
    conda create -n [name] package [package] ...
    # activate an env
    conda activate [name]
    # deactivate an env
    conda deactivate
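The left-to-right PATH search described above can be seen with a tiny experiment that needs no conda at all (the directory and script names below are arbitrary):

```shell
# Create a throwaway bin directory containing a dummy executable
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho from-demo\n' > /tmp/demo_bin/hello
chmod +x /tmp/demo_bin/hello

# Prepend the directory to PATH for one command: the shell finds our
# "hello" first, just as it finds an environment's tools first when
# /home/manager/miniconda3/envs/ngs/bin sits at the front of PATH.
PATH="/tmp/demo_bin:$PATH" hello   # prints: from-demo
```

This is exactly the overshadowing mechanism conda activate relies on.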
http://www.physicsforums.com/showthread.php?p=3354849
## When p(A)=0 iff p(B)=0 for any polynomial, why same minimal polynomial?

For two matrices A and B, when p(A)=0 iff p(B)=0 for any polynomial p, what will happen? I read that A and B have the same minimal polynomial. Why?

Let $$m_A, m_B$$ be the minimal polynomials of A and B. Then

$$m_A(A) = 0 \Rightarrow m_A(B) = 0 \Rightarrow m_B \mid m_A \quad (1)$$

and

$$m_B(B) = 0 \Rightarrow m_B(A) = 0 \Rightarrow m_A \mid m_B \quad (2)$$

By (1) and (2), $$m_A = k \cdot m_B$$ with k a constant. But $$m_A, m_B$$ are both monic polynomials, so $$k=1$$ and finally $$m_A = m_B.$$

Pretty much the same thing but in slightly different words: Suppose P_A(x), of degree n, is the minimal polynomial for A. Then P_A(A) = 0, so P_A(B) = 0. If this is not the minimal polynomial for B, there exists a polynomial P_B, of degree m < n, such that P_B(B) = 0. But then P_B(A) = 0, contradicting the fact that the minimal polynomial of A has degree n > m.
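The shared-minimal-polynomial conclusion is easy to sanity-check numerically for a concrete pair. The sketch below uses matrices of my own choosing (not from the thread): a 2x2 Jordan block A and its transpose B are similar, so p(A)=0 iff p(B)=0, and both turn out to have minimal polynomial (x - 1)^2.

```python
import numpy as np

# Illustrative pair (my choice, not from the thread): A is a 2x2 Jordan
# block for eigenvalue 1, and B = A^T is similar to A, so p(A) = 0 iff
# p(B) = 0 for every polynomial p.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = A.T
I = np.eye(2)

# (x - 1) annihilates neither matrix ...
assert not np.allclose(A - I, 0)
assert not np.allclose(B - I, 0)

# ... but (x - 1)^2 annihilates both, so both minimal polynomials
# are (x - 1)^2, as the divisibility argument predicts.
assert np.allclose((A - I) @ (A - I), 0)
assert np.allclose((B - I) @ (B - I), 0)
print("A and B share the minimal polynomial (x - 1)^2")
```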
https://www.physicsforums.com/threads/particular-integral-help-equating-terms.329670/
# Particular Integral - Help equating terms

1. Aug 8, 2009

### _Greg_

1. The problem statement, all variables and given/known data
Ok, so I have a 2nd order differential equation. I can get the complementary function no problem; it's getting numerical values for the terms in the particular integral that I can't do.

2. Relevant equations
y'' - y' - 2y = t²

3. The attempt at a solution
Complementary function: y(t) = Ae^(2t) + Be^(-t)

All fine and dandy. Now the particular integral for t²:
y_pi(t) = At² + Bt + C

Now we find the first and second derivatives:
First: 2At + B
Second: 2A

Substituting these terms back into the original equation:
2A - (2At + B) - 2(At² + Bt + C) = t²

This is where I'm stuck. I'm looking at my notes for the next bit:

We can find A, B and C by equating terms, so:
t²: -2A = 1
t: -2A - 2B = 0
1: 2A - B - 2C = 0

I don't understand that at all; can someone explain it a bit further?

2. Aug 8, 2009

### Pengwuino

If you group up the terms, you have: (2A - B - 2C) + (-2A - 2B)t + (-2A)t² = t². You can also say you have (2A - B - 2C) + (-2A - 2B)t + (-2A)t² = 0 + 0t + 1t². For that equation to be true, each pair of coefficients must be equal, that is -2A = 1, -2A - 2B = 0 aka 2A = -2B, and 2A - B - 2C = 0. Three equations, three unknowns, you can solve for your particular solution.

3. Aug 9, 2009

### _Greg_

Cheers Pengwuino, I think I get the general idea: separate the t² terms from the t terms, then everything else. So:
2A - (2At + B) - 2(At² + Bt + C) = t²
Breaking down the brackets:
2A - 2At - B - 2At² - 2Bt - 2C = t²
(-2A)t² + (-2A - 2B)t + (2A - B - 2C) = 1t² + 0t + 0
Therefore:
-2A = 1
-2A - 2B = 0
2A - B - 2C = 0
So:
A = -1/2
B = 1/2
C = -3/4

God, this is so much to remember: find roots, use the correct general formula for the roots from memory, remember what form of particular integral to use, calculate the 1st and 2nd derivatives, plug back into the equation, break down, and combine the complementary function with the particular integral. Think I better get some practice!

E2A: I've got a list of particular integral formats for different functions. For this particular question I thought I'd better check: would I be right in saying that for f(x) = x + 6 ------> y = Cx + D?

4. Aug 9, 2009

### Pengwuino

Do you mean you have a new problem where the inhomogeneous part is f(x) = x + 6? If so, yes, that is the particular solution in most cases. Remember, though, if your homogeneous solution has x or a constant as a solution, you need to tweak your attempt.
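The particular integral worked out in the thread can be verified symbolically. Note that back-substitution gives the constant term as C = -3/4 (with C = +3/4 the left-hand side comes out as t² - 3, not t²). A quick check, using sympy as an illustrative tool of my choice:

```python
import sympy as sp

t = sp.symbols('t')

# Coefficients found by equating terms: A = -1/2, B = 1/2, C = -3/4
y_p = -t**2 / 2 + t / 2 - sp.Rational(3, 4)

# Substitute y_p into the left-hand side of  y'' - y' - 2y = t^2
lhs = sp.diff(y_p, t, 2) - sp.diff(y_p, t) - 2 * y_p

# The residual against the forcing term t^2 must vanish
assert sp.simplify(lhs - t**2) == 0
print("y_p =", y_p, "satisfies y'' - y' - 2y = t^2")
```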
https://en.wikipedia.org/wiki/Wikipedia:RD/Science
# Wikipedia:Reference desk/Science

# September 16

## Lesch–Nyhan syndrome

Lesch–Nyhan syndrome causes a mess of problems: the patient cannot produce a specific enzyme, produces too much uric acid, experiences intellectual disability severe enough to prevent speech and ambulation, and is susceptible to a raftload of self-harming behaviors. I get the impression that the enzyme absence causes the overproduction of uric acid somehow, but what causes the rest? The article seems to suggest that the hyperuricemia is responsible for everything, yet that seems unlikely (why would too much of a toxic acid cause compulsive self-harming?) — but then I've looked over the "Pathophysiology" section without understanding much of anything after the first three paragraphs. Nyttend (talk) 03:35, 16 September 2017 (UTC) From the article: "The etiology of the neurological abnormalities remains unknown."
Ruslik_Zero 08:13, 16 September 2017 (UTC) Nonetheless, it might be useful to research purine autism, which also involves hyperuricemia. Purine autism is interesting in that it is a kind of autism said to respond to allopurinol treatment, though I would have expected much more confirmation of that by now if it were reliable. [1] Note that autism can also cause self-harm activity and is mentioned in this regard in our Lesch-Nyhan article. Wnt (talk) 11:32, 16 September 2017 (UTC) Hm, I didn't realise that "etiology" meant "origin" or "cause". Nyttend (talk) 12:33, 16 September 2017 (UTC)

## Aquo complex versus hydrated compound

Is there a clear demarcation between an aquo complex and a hydrated compound; or does it simply depend on context; or does it depend on the particular author? I am not talking about water of crystallization when I say 'hydrated compound'. Case in point: tetraaquocopper(2+) sulfate versus copper(2+) sulfate tetrahydrate. Does it have anything to do with the degree of lability of the aquo ligands, or perhaps whether the complex is homoleptic? Plasmic Physics (talk) 07:03, 16 September 2017 (UTC) At the level of one of my freshman chemistry textbooks (General Chemistry by Ralph H. Petrucci 5th edition 1985, p.911) and this website [2] (Washington University at St. Louis), both would appear to be non-standard names for coordination complexes, although the first is close. Standard nomenclature would be tetraaquacopper(II) sulfate, indicating copper 2+ coordinated to 4 waters and not coordinated to the sulfate counter ion. "Copper(2+) sulfate tetrahydrate" would mean CuSO4.4H2O to me, which would be an example of water of crystallization.
An example of where you have seen this would be helpful, but using CuSO4.4H2O terminology would be necessary in instructions for a copper sulfate solution preparation to get the right mass of starting reagent for the desired concentration of copper sulfate.--Wikimedes (talk) 17:45, 16 September 2017 (UTC) Using the ionic charge instead of the oxidation state when constructing additive names seems to be the preferred alternative convention according to the latest IUPAC nomenclature conventions. From what I've read, when 'hydrate' appears in the name, it can refer to either an aquo complex or water of crystallization, introducing vagueness. From your concluding remark, I gather that which one to choose depends more on the context? If that is the case, what defines the context in which either use would be appropriate? Plasmic Physics (talk) 23:28, 16 September 2017 (UTC) Standard nomenclature seems unambiguous, and you probably can't go wrong by choosing the standard way of saying or writing things. (2+ vs. II or aquo vs. aqua wouldn't impede understanding and is just a tangential nitpick.) One of my fields is solid state chemistry; I'm not up on common variations of standard nomenclature in coordination chemistry and can't recall much anecdotally.--Wikimedes (talk) 02:29, 17 September 2017 (UTC) (My apologies, it appears it is actually supposed to be 'aqua'.) Solid-state chemistry? Interesting... How can you tell experimentally or theoretically if an atom is bound to a neighbouring atom in a bulk phase? Plasmic Physics (talk) 03:03, 17 September 2017 (UTC) Experimentally, X-ray crystallography would be the technique of choice for crystalline compounds, and can give some information about amorphous solids as well. There are also the related neutron and electron diffraction. Solid-state NMR gives information on the proximities of NMR-active nuclei. According to table 3.1, p.48 of Solid State Chemistry and its Applications by Anthony R.
West 1984, most types of spectroscopy give information on local structure, though you would have to do some research to find out which technique is best used for a particular situation. Theoretically, according to p.20 of the January 2017 edition of the MRS Bulletin, "... (molecular dynamics) simulations are routinely used to model the structure of materials with steadily increasing accuracy".--Wikimedes (talk) 18:58, 17 September 2017 (UTC)

## Breaking boards

How much force does it take to perform a power break on an unpegged stack of 4 standard 1-inch-thick boards made from seasoned pine, dimensions 12x6 inches, grain along the 6-inch edge? How much more force is needed if the stack is increased to 6 boards with all the other parameters remaining the same? (This is NOT homework -- this is me trying to gauge my own strength!) 2601:646:8E01:7E0B:3DB7:8D6E:A762:14CC (talk) 08:02, 16 September 2017 (UTC) You don't break them with force (at least, not in karate), you use impulse to do it. This is the product of force and time (strictly, the time integral of force). There are two impulses under consideration here, the impulse given to the moving hand by the body, and the impulse the moving hand then transfers to the boards. Much of karate training is about increasing the impulse given to the hand, in the short time available. Considering human muscle generally, we're "strong but slow" - a mediocre performer in pure strength sports can produce more force than most karateka can, but they can't deliver anything like the same impulse. The ultimate demonstration of this being of course the "inch punch" techniques. This also indicates why an empty hand technique is limited (for pure striking effect) against anything with a weighted glove, or a kick (legs are heavier than arms). Human muscles are better at loading impulse into things slightly heavier than a single hand. Then there's the matter of transferring the impulse into the board.
It takes a certain amount of energy to fracture a brittle material like dry cross-grain timber, so six boards will need at least 6/4 as much as four. But, one must also couple the impulse of the hand effectively into the boards. This is very difficult to model as a physical process, because it's fast and the materials (mostly hands) are flexible, so they turn the energy of the moving hand into compressed and displaced flesh, rather than bending (and breaking) the board. This is the second matter of karate training: which part to strike with and how to keep the hand rigid enough. Then there's the physics of breaking boards. The easiest boards to break are a stack of moderately thin boards, already touching at the impact point. It is hard to break the same boards with spacers at their edges. If they're spaced apart for visibility, they sometimes have a 'coupler' in the middle too. It's very hard - soon impossible - to break a single board of the same thickness. Also don't demonstrate against a carpenter who gets to choose which boards to use, because there's visible and selectable variations in their strength. Boards are broken by exceeding their strain, not their stress. It's not applying a force greater than their ability to resist, it's bending the board past their linear ability to extend (why the materials broken are chosen for being brittle, i.e. having a poor resistance to strain, even if strong against a stressing force). If the boards are too thick, you can't bend them enough to achieve this strain - you may put a Herculean dent into the top, but they don't snap. If they're too thin, they become simply flexible and all you will do is bend them down and have them spring back afterwards - this is some of the theory behind composite materials. If the boards are in a spaced stack, then there is considerable bending in the upper boards (absorbing your impulse), but there isn't enough left for the lower boards to break them.
Bending the upper boards downwards to touch the next board absorbs energy and this energy is lost to the strike when the board breaks (it turns into sound or an acceleration of the board halves starting to move independently). So a spaced stack of boards will require an impulse that increases more than the number of boards. Exactly how much more is tricky to work out, as it depends on many factors - such as the relative energy needed to bend a board down vs. the energy to break it. A stiff board in a spaced stack (such as tiles) is very hard to break. Boards and tiles are broken for demonstrations because they demonstrate the effect of impulse over force. To show simple force strength (and not highlight the effects of speed), then punching a swinging bag and seeing how far it moved would show this better - and the karateka would probably lose to Indian wrestlers. Andy Dingley (talk) 09:39, 16 September 2017 (UTC) I am suspicious you are thinking of something else besides impulse here. The impulse is force x time, and causes an acceleration of the target object. In other words, the swinging bag would be a perfect measurement of impulse. What you are describing seems more like simple speed, or perhaps some other quality, that allows the board to be pulled far off center before it has much of a chance to push back on the hand i.e. to apply its impulse to the hand. But I know nothing of how to break multiple boards with a hand, so I am likely wrong. Wnt (talk) 11:39, 16 September 2017 (UTC) Would it be rate of transfer of impulse (= rate of change of momentum) that is important here? Dbfirs 12:05, 16 September 2017 (UTC) And if so, that would be the force! (F*t)/t=F 2601:646:8E01:7E0B:3DB7:8D6E:A762:14CC (talk) 12:17, 16 September 2017 (UTC) So it would! I've never tried to break boards (at least not seriously). Presumably, one needs both a large impulse and a fast rate of transfer (speed of strike; force) to achieve the effect.
There is some discussion here but I'm not convinced by the approximations in the mathematics. Dbfirs 12:27, 16 September 2017 (UTC) The link loops back to Wikipedia: Breaking (martial arts) for a source. Our article there has some data (also shows some signs of personal stress, like the uppercase link to SCIENCE). I am still not convinced it is very simple. For one thing, the displacement of the board depends on the force, but force is a tricky thing. Suppose you contrast my hand, modelled by a heavy sausage wrapped in confectionary marshmallow, versus the hand of a martial artist, modelled by the sausage wrapped in horn. Well, if I had a rocket booster handy to get my hand up to the same speed as the artist, the impulse would be the same. But the force wouldn't be the same because the whole mass of the marshmallow has to squish up (OUCH!) before all the momentum is transferred; but the horny hand of the artist presumably is more rigid and conducts that impulse rapidly upon the first few millimeters of contact (and impulse/time = force). Bent off center more rapidly by the increased force, perhaps the board breaks sooner, and therefore has much less time to transfer impulse back to the hand of the artist. So how much is speed, needed to build up the hand's impulse, and how much is rigidity, to deliver it? And what else have I forgotten? Wnt (talk) 18:07, 16 September 2017 (UTC) So, anyone find an approximate number value for the force involved? 2601:646:8E01:7E0B:3DB7:8D6E:A762:14CC (talk) 07:44, 18 September 2017 (UTC) Force is the wrong measure. Andy Dingley (talk) 10:13, 20 September 2017 (UTC)

## Shortest Nobel Prize-winning paper

Today I learned that Watson & Crick's paper announcing their discovery (or at least, their part in the discovery) of the structure of DNA is a mere 834 words - exceptional brevity for such a significant finding. Is that a record?
I appreciate Nobel Prizes aren't for papers but for research generally, but using published papers which described Nobel-winning research as a metric has anyone ever bettered 834 words? 51.9.138.245 (talk) 10:10, 16 September 2017 (UTC) Watson & Crick's paper was so short largely because they were in such a hurry. Much of the background work was already known. Pauling had already published one postulated structure, which was largely held to be unworkable. They thought that others (i.e. Pauling) would reach their own conclusions for themselves if given much more time. Watson & Crick had two ideas, the double helix (rather than a triple structure) and also the "zipper" idea for replication. They wanted to be Wallace, not Darwin. Their paper wasn't publishing the results of long years of research and careful study (that was mostly out there already, or was filled out later) it was throwing two wild ideas out to claim clear priority on them, even if they turned out to be wrong later. The specific paper is a brilliant hypothesis, not careful research. Andy Dingley (talk) 10:36, 16 September 2017 (UTC) One of the justifications for Franklin not being mentioned on this paper has been that the paper was based on the two "wild guesses" that were so urgent to make publicly visible and so to claim precedence for. Franklin's work had been the painstaking careful research that led up to this. Sadly and IMHO wrongly, the Nobel Prize was awarded for the narrow paper, and overlooked Franklin's contribution to making it possible to have those ideas. Andy Dingley (talk) 12:42, 16 September 2017 (UTC) I fully agree that Franklin should have been given more credit by the scientific establishment, but it's difficult to see how the Nobel Committee itself could have: Nobel Prizes are never awarded posthumously, and Franklin had died in 1958, 4 years before the award to Watson, Crick and Maurice Wilkins. Watson himself suggested that, had she lived, she might also have shared the award. 
Perhaps those three should themselves have made more of her contribution, which Franklin may never have been fully aware of, as Wilkins had shown her critical 1952 photo to W & C in 1953 without her knowledge, and after she herself had moved on to a different college and different areas of research. {The poster formerly known as 87.81.230.195} 90.200.137.12 (talk) 16:06, 16 September 2017 (UTC) Well, her also sharing the award with Watson, Crick and Wilkins if she had lived was surely about as likely as her getting the award after she died, i.e. really unlikely. Prize rules limiting a single prize to 3 people would have required either that there were 2 awards in two separate categories or years, or something weird like the Randall X-ray diffraction lab. Mind you, I'm not sure anything but the Nobel Peace Prize can be awarded to an organisation. Our article suggests so, but the cited source doesn't really seem to say this, and the closest I can find on the Nobel site is that in some places they mention the Nobel Peace Prize has been awarded to organisations etc. Anyway, if they had wanted to award to an organisation in memory of her, I don't see that her death really stopped that. And if they were going to award 2 prizes, they could have simply said no living person deserved the second, although that may seem a little weird since more likely no living person and Wilkins should share the second prize. Mind you, the no-3-people and posthumous rules [3] don't really seem to come from Nobel's will anyway [4]. Also, although that clearly says the award is divided equally among the recipients, [5] suggests it isn't always. I guess most likely this arises when 2 separate works receive an award, in which case the one work with 2 recipients gets a 1/2 share which is divided among the 2 people, and the other gets a 1/2 share which goes to that single awardee.
Nil Einne (talk) 17:48, 16 September 2017 (UTC) My assumption is that Franklin would have been jointly awarded the Prize instead of Wilkins. Both of them had been working on DNA crystallography, and Franklin had been recruited by their mutual boss John Randall to either collaborate with Wilkins (as he believed) or to take the work over and forward it (as she believed – Randall's poor management of them led to their misunderstanding). Franklin's improved data was crucial to Watson and Crick, but with her dead, it was not unreasonable to include Wilkins as the "next biggest player" in the discovery. {The poster formerly known as 87.81.230.195} 90.200.137.12 (talk) 19:46, 17 September 2017 (UTC) Here is the paper, and as you see, Franklin is acknowledged (as well as Wilkins). Possibly. However, your original comment said "also" without any mention of excluding any recipients, so I was replying to that. Nil Einne (talk) 05:06, 18 September 2017 (UTC) Acknowledgement isn't an authoring credit though. Andy Dingley (talk) 10:16, 20 September 2017 (UTC) • The assumption I've heard is that if Franklin had survived and been able to promote her work (and that of her doctoral student Raymond Gosling), there would probably have been two prizes: Watson and Crick would have won the Nobel Prize in Physiology (for working out the reproduction mechanism of DNA) and Wilkins, Franklin (and maybe Gosling) would have won the Nobel Prize in Chemistry or maybe Physics (for pioneering x-ray crystallography methods that uncovered the structure of DNA). Smurrayinchester 10:18, 20 September 2017 (UTC) Conway famously published a paper with the title "Can n² + 1 unit equilateral triangles cover an equilateral triangle of side > n, say n + ε?" with the substantive body of "n² + 2 can", accompanied by two diagrams. — Preceding unsigned comment added by 2A01:E34:EF5E:4640:35DC:A78A:A81D:9CA4 (talk) 15:32, 16 September 2017 (UTC) The question was about Nobel Prizes, though.
--69.159.60.147 (talk) 22:14, 17 September 2017 (UTC)

## Infinity

Everybody thinks of infinity as space in expansion. For me the description of infinity is: the wave of the Big Bang forever increasing. Sorry, but I don't have any background in science; is it possible for somebody to tell me if this is possible? Sincerely, J-M.A — Preceding unsigned comment added by 70.79.181.150 (talk) 14:57, 16 September 2017 (UTC) Yes, there are many infinities. There's infinitely large, infinitely small, infinitely forward in time, and infinitely backwards in time, for example. You might also be interested in the infinite worlds hypothesis. StuRat (talk) 15:06, 16 September 2017 (UTC) Some people regard "infinitely small" as a contradiction in terms. The usual term is Infinitesimal. Mathematically, there is a hierarchy of infinities (see Stu's link above). For example, there are more real numbers than there are whole numbers, but the number of fractions is the same as the number of whole numbers. Dbfirs 16:40, 16 September 2017 (UTC) But no matter how you slice it, "infinity" is not a quantity, it is not a number. ←Baseball Bugs What's up, Doc? carrots→ 18:33, 16 September 2017 (UTC) It's actually a cardinality. Dbfirs 19:36, 16 September 2017 (UTC) Yes, there are different "cardinalities" of infinity. Hence the terms "countably" infinite vs. "uncountably" infinite. But infinity is infinity. As Carl Sagan said, no matter how large a number you can imagine, you are no closer to infinity than is the number 1. ←Baseball Bugs What's up, Doc? carrots→ 03:58, 17 September 2017 (UTC) Did you read the article on cardinality? Dbfirs 06:25, 17 September 2017 (UTC) The term "infinitesimal", which is used to mean "infinitely small", is essentially a way of saying "infiniteth", as compared with "tenth" or "hundredth" or "thousandth".[6]←Baseball Bugs What's up, Doc?
carrots→ 04:07, 17 September 2017 (UTC) See also our Shape of the universe#Infinite or finite. -- ToE 17:46, 16 September 2017 (UTC) Nothing is infinite (with one exception). Everything is by design or definition in fact finite as a concept of science. The only exception is the literal Nothing in the sense of empty space, which is the only "thing" that can be regarded as infinite without braking our laws of physics. --Kharon (talk) 03:21, 17 September 2017 (UTC) Or breaking. But braking works too. :) And if space is "finite but unbounded" then your premise remains true. ←Baseball Bugs What's up, Doc? carrots→ 04:00, 17 September 2017 (UTC) The metric expansion of space should be braking, but isn't, so the Americans are apparently the only ones to put the brakes on something metric. :-) StuRat (talk) 04:29, 17 September 2017 (UTC) Ofcourse i meant Breaking. Im German but my englisch was always very, very good. No idea how that got so wrong in my head - or out of, to be more precise. I promise I'll take more care for my writing. --Kharon (talk) 05:06, 18 September 2017 (UTC) I'll give you a break and assume your misspelling and improper capitalization of "English" was intentional. :-) StuRat (talk) 16:49, 18 September 2017 (UTC) What's the diff between a straight-A student, her teacher, and Ford Proving Grounds? One breaks the curve on a test, the next curves the test on a break, and the last tests the brakes on a curve. StuRat (talk) 16:43, 18 September 2017 (UTC)

## What does D. Œ. A. V. stand for?

In working on the history of glaciology, I'm finding references to "D. Œ. A. V.", such as this: "Le succès des sondages de 1899 encourageant les plus grands espoirs pour la réussite du levé complet par la même voie, d'un profil transversal du glacier, le Comité central du D. Œ. A. V., qui avait subventionné les premiers travaux, consentit avec la plus louable munificence à faire les frais d'une nouvelle campagne de sondages dans ce but."
which Google translates as "The success of the surveys of 1899, which encouraged the greatest hopes for the success of the complete survey by the same route, of a transverse profile of the glacier, the central committee of the D. Œ. A. V., which had subsidized the first works, consented with the most laudable munificence to pay the cost of a new survey campaign for this purpose." I also see this abbreviation in some old citations. Can anyone tell me what it stands for? The "Œ" quite probably stands for "Österreich" or some variation, since we're talking about the Alps; the A might derive from the Alps. Googling the abbreviation and searching Google Scholar for the cited papers hasn't gotten me anywhere. Any other way to find out? Mike Christie (talk - contribs - library) 16:24, 16 September 2017 (UTC) Possibly Deutscher und Österreichischer Alpenverein. Cheers hugarheimur 16:46, 16 September 2017 (UTC) That's sure to be it. Thank you very much! Mike Christie (talk - contribs - library) 16:50, 16 September 2017 (UTC) D. Œ. A. V. and D.Œ.A.V. now created as redirects. Nyttend (talk) 23:24, 17 September 2017 (UTC)

## Atmospheric pressure

My Thermodynamics professor drew a free-body diagram for a lid being lifted by boiling water in the pot below. Applying downward force is gravity and atmospheric pressure, he said, while vapor pressure provides the lift. But doesn't the air inside the pot provide a lift as well due to an equal amount of atmospheric pressure as the air above? Thank you. Imagine Reason (talk) 16:49, 16 September 2017 (UTC) It could be that your professor is assuming that water vapor has displaced all the air in the pot. Your professor is the best authority on the assumptions s/he has made in his/her model; this would be a good question to ask him/her.--Wikimedes (talk) 17:05, 16 September 2017 (UTC) Until the lid is lifted, the gas underneath it should be at atmospheric pressure plus the water vapor pressure.
So yes, the atmospheric pressure on both sides of the lid cancels out. StuRat (talk) 17:13, 16 September 2017 (UTC) Since the sides of the pot are not involved in the scenario, should I assume that the lid has been lifted? In that case, will the vapor have completely displaced all the air? Thank you. Imagine Reason (talk) 23:13, 16 September 2017 (UTC) After a little more thought, I see it this way: If the water is boiling, it means that the gas above the water in the pot is at (or less than) the vapor pressure of water for the temperature of the water. On the stove top, heat is put in until the water temperature rises to the point where this pressure starts to push off the lid. This upward pressure is equal to the downward pressure on the lid caused by gravity acting on the lid and the external gas pressure on the lid, which is atmospheric pressure. Another way to look at it is that the gas inside the pot is isolated from the atmosphere outside the pot (and the weight of the ~100km column of air that causes atmospheric pressure), so there's no reason to expect atmospheric pressure to be a factor inside the pot.--Wikimedes (talk) 00:42, 17 September 2017 (UTC) The air is not displaced. In fact it will not be displaced but will, to the contrary, remain and get saturated (with steam) depending on its temperature and pressure, in a multicomponent (water and air: nitrogen, oxygen, argon, carbon dioxide, etc.) vapour–liquid equilibrium commonly named and known as "steam" (see the steam T-s diagram in the picture on the right). --Kharon (talk) 03:00, 17 September 2017 (UTC) • Ask the professor, but a possible simplification would start with the lid directly atop the water with no air under it, and with no air dissolved in the water.
As it happens, in real life before boiling starts there will be air under the lid and air dissolved in the water, but the total amount of air is small in proportion to the amount of steam that is generated, so its effect is mathematically negligible in a sufficiently large container.-Arch dude (talk) 04:03, 17 September 2017 (UTC) I don't want to ask the professor because he introduced concepts like gage pressure without much explanation and refused to answer a student who asked about it. He didn't mention anything about cooking a vacuum. Also, if the atmospheric pressure from the air inside the pot is negligible, then so is the atm outside, no? Imagine Reason (talk) 04:46, 17 September 2017 (UTC) You are obviously at the start of your thermodynamics lessons, or else you would not have asked your question. It will be answered later on. You will learn to use the Gas constant, learn about Bernoulli's principle etc. Your professor probably tried to tease you to get you interested and wanting to learn. You still have to learn, to understand, which obviously seems much more boring than trying to bend your mind around it with what you already believe you understand. Have some patience, do your lessons, and you will be able to answer this question and other new ones yourself. --Kharon (talk) 05:40, 17 September 2017 (UTC) If the atmospheric pressure from the air inside the pot is negligible, then the atmospheric pressure from the outside should be neglected as well, no? It's not really a matter of atmospheric pressure inside the pot being negligible (i.e. so small it can be ignored). Atmospheric pressure is caused by the weight of the atmosphere. After the lid is put on, the gas inside the pot is isolated from the weight of the atmosphere, so it doesn't make sense to use the weight of the atmosphere to model the pressure inside the pot (except perhaps when the lid is first put on and nothing has changed inside the pot yet).
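The force balance being described in this thread can be put in numbers. Below is a minimal sketch, assuming a hypothetical pot (the 24 cm lid diameter and 500 g lid mass are invented for illustration), using the Antoine equation for water's vapor pressure. Since atmospheric pressure acts on both sides of the lid, the steam only has to supply the extra pressure corresponding to the lid's weight:

```python
import math

# Hypothetical pot: these numbers are made up for illustration.
lid_diameter = 0.24      # m
lid_mass = 0.5           # kg
g = 9.81                 # m/s^2
p_atm = 101325.0         # Pa

area = math.pi * (lid_diameter / 2) ** 2
# Extra (gauge) pressure the steam must supply to lift the lid:
# atmospheric pressure cancels on both sides, so only the weight remains.
delta_p = lid_mass * g / area

def vapor_pressure_water(t_celsius):
    """Antoine equation for water (valid roughly 1-100 C), result in Pa."""
    a, b, c = 8.07131, 1730.63, 233.426     # constants for P in mmHg
    p_mmhg = 10 ** (a - b / (c + t_celsius))
    return p_mmhg * 133.322

# Find the temperature where the vapor pressure first exceeds
# atmospheric pressure plus the lid-weight term.
t = 90.0
while vapor_pressure_water(t) < p_atm + delta_p:
    t += 0.01

print(f"gauge pressure needed: {delta_p:.0f} Pa")
print(f"lid starts to lift near {t:.1f} C")
```

For a typical lid the weight term comes out to only on the order of 100 Pa, so the lid starts to rattle just a hair above the normal 100 C boiling point, which is consistent with the argument that the atmospheric contributions cancel.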
Outside the pot, however, the weight of the atmosphere is still pushing down on the lid (even after the pot heats up), so atmospheric pressure must be taken into account.--Wikimedes (talk) 19:43, 17 September 2017 (UTC) The partial pressure contribution of the air molecules is negligible because they are a negligible percentage of the total gas in the pot after it commences boiling. Essentially all of the gas is steam, as the steam molecules increase while the number of air molecules remains fixed. ("air molecules": the usual mix of gases. "steam molecules": H2O). -Arch dude (talk) 20:34, 17 September 2017 (UTC) Forgive me for still not understanding. Right after you close the lid, the air inside the pot is lifting the cover with 1 atm of pressure. Then you apply heat to the contents of the pot. The air inside then should contribute more than 1 atm of pressure now, thus canceling the weight of the air on top of the cover. Imagine Reason (talk) 03:28, 18 September 2017 (UTC) When the air inside the pot is contributing more than 1 atm of pressure, it no longer exactly cancels the external atmospheric pressure. At this point you have to consider them as separate forces acting on the lid, rather than ignoring them.--Wikimedes (talk) 05:09, 18 September 2017 (UTC) See Dalton's law. If the gas under the lid is 50% air and 50% steam, then the equilibrium pressure of the mixture will be that of the steam, which will be given by the Vapour pressure of water. HTH, Robinh (talk) ## Feynman Lectures. Exercises. Exercise 14-15 14-15. A certain spring has a force constant k. If it is stretched to a new equilibrium length within its linear range, by a constant force F, show that it has the same force constant for displacements from the new equilibrium position. — R. B. Leighton, Feynman Lectures on Physics. Exercises In Solutions they write: Let the spring be stretched by the force F0. The displacement can be found from F0 = k x0. Let's stretch the spring more by x.
Then the new force is: k(x0 + x) = k x0 + kx = F0 + kx. So the extra force is the same as if the spring were stretched from the undisturbed state. But the Solution's author uses k = const from the beginning of the proof. If k is some function of x, and F = k(x)•x only for very small x, then how to prove the exercise? In other words, we have the undisturbed spring length L1 and we know the law for it: FI = k1 x for small x. And we have the stretched spring length L2 and the law FII = k2 x for small x. From these two laws it is clear that F ≠ k2(L2 - L1) ≠ k1(L2 - L1). Username160611000000 (talk) 19:02, 16 September 2017 (UTC) If a spring has a force constant, then k should be a constant. It is ridiculous that we are asked to prove k = const, and this is given under the statement of the problem. No, I think we should not use k = const. Besides, the exercise is for Lectures 13-14, "Work and Potential Energy". Username160611000000 (talk) 08:14, 17 September 2017 (UTC) I think you're being asked to prove the constant is constant from the new position. Yes, it's pretty elementary, I mean, k(x+dx) - kx = kdx or something. They can't all be stumpers. Wnt (talk) 18:15, 17 September 2017 (UTC) If k increases in the spring stretched under force, it seems apparent the new k will be the "force constant" for very small deviations, if the function's derivative is continuous, i.e. FII = k2(x ± Δx) for small Δx. Wnt (talk) 19:35, 16 September 2017 (UTC) It is not impossible, e.g. k = sin(x). Then for x = 0.5π, 1.5π etc. the derivative = 0 and so k(x) = k(x+dx). Username160611000000 (talk) 08:14, 17 September 2017 (UTC) yeah, but ... if k = 0 at 1.5 m, how is the pendulum going to stay there when it's under some continuous force? Also, this isn't really an exception: for this, FII = 0·(x ± Δx). For small deviations it's not a spring, but it's not an exception.
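The tangent-versus-secant distinction being debated here can be checked numerically. This is only a sketch with a made-up cubic spring law (not anything from the Lectures): about a stretched equilibrium x0, the effective force constant for small displacements is the derivative F'(x0), which generally differs from the secant F0/x0; for a genuinely linear spring all three quantities coincide, which is what the exercise asks one to show.

```python
# A deliberately nonlinear, hypothetical spring: F(x) = k1*x + k3*x^3.
k1, k3 = 100.0, 5000.0   # made-up constants, N/m and N/m^3

def force(x):
    return k1 * x + k3 * x ** 3

def stiffness(x):
    """Analytic derivative F'(x): the effective force constant at x."""
    return k1 + 3 * k3 * x ** 2

x0 = 0.05                 # new equilibrium, held by F0 = force(x0)
dx = 1e-4                 # small extra displacement

numeric_k = (force(x0 + dx) - force(x0)) / dx
print(f"secant  F0/x0   = {force(x0) / x0:.1f} N/m")
print(f"tangent F'(x0)  = {stiffness(x0):.1f} N/m")
print(f"numeric dF/dx   = {numeric_k:.1f} N/m")
# With k3 = 0 (a truly linear spring) all three numbers coincide.
```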
Wnt (talk) 09:34, 17 September 2017 (UTC) ## ancient glacial features It's easy to point to valleys that were shaped by glaciers in the last million years. But what are the oldest known glacial features? Are there glacial valleys in continents that were near the poles in Mesozoic times? —Tamfang (talk) 20:35, 16 September 2017 (UTC) Note that land didn't need to be near the poles to have glaciers. Glaciers covered much of Europe and North America in the most recent ice age, for example. StuRat (talk) 20:53, 16 September 2017 (UTC) See here. Count Iblis (talk) 21:17, 16 September 2017 (UTC) Global glaciations and atmospheric change at ca. 2.3 Ga. Count Iblis (talk) 22:14, 16 September 2017 (UTC) As others have noted above, we have seen signs of ancient glaciations going back billions of years. That said, features like carved valleys will be eroded and changed over time. Given 10-20 million years or so, most features won't be easily recognizable as glacier-derived except by detailed study from experts. Dragons flight (talk) 09:11, 17 September 2017 (UTC) # September 17 ## Vaccination boosters Why are adults, other than those at high risk, not offered a booster for the chickenpox vaccination if the dose given at childhood doesn't last a lifetime? 90.198.254.50 (talk) 09:08, 17 September 2017 (UTC) You didn't say where, but I see from your IP it is the UK. My guess is because the NHS is being stretched far too far by austerity and has far more important things to spend its money on. But even if it was offered it probably wouldn't be worth it till people were 60 and their immune systems started to get weaker and they got more liable to get shingles; there's enough chickenpox around still anyway to keep the immune system primed normally. If we were halfway towards eradicating chickenpox it probably would be worthwhile doing booster shots more generally until it was totally eliminated - but that isn't viewed as any sort of important target at the moment.
There have been efforts to eliminate measles but those have been stymied by stupidity and ignorance in the developed world in refusing the MMR vaccine. Dmcq (talk) 12:30, 17 September 2017 (UTC) In the case of testing for bowel cancer every patient is written to every two years between the ages of 65 and 70 I believe, and the letters say that those over 70 are welcome to request a self-test pack. So far as I know, testing and vaccination for common diseases is given on request - I wouldn't like the OP to think (s)he can't have a vaccination if (s)he wants one. If you've had shingles once you can get it again, and vaccination does not provide complete protection. 82.14.24.95 (talk) 12:54, 17 September 2017 (UTC) Varicella vaccine is given two times, not just one. It is often given on the same schedule (1 year and 4-6 years) as the vaccinations against measles, mumps and rubella (MMR). There are even combined quadrivalent vaccines for these diseases. The third adult (or booster) dose is not given for the same reason as for MMR: it is generally considered pointless. The vast majority will remain immune for life after two doses. So, returns are diminishing quickly for any additional dose. Ruslik_Zero 18:59, 17 September 2017 (UTC) • The Zoster vaccine article discusses the various US, UK, and European recommendations. It's basically a high-dose chickenpox vaccine for adults, either over 60 or at risk. -Arch dude (talk) 20:22, 17 September 2017 (UTC) There is a confusion here with zoster vaccines. They serve a different purpose and are given to those who are already infected. The question was about a vaccine to prevent the primary infection. Ruslik_Zero 20:28, 17 September 2017 (UTC) Our varicella vaccine article seems to me to discuss the issue well. What do you think is not adequately answered by the article?
--47.138.161.183 (talk) 05:41, 18 September 2017 (UTC) Well, I personally know of at least one person who contracted shingles just weeks after receiving a chicken-pox/zoster vaccine a few years back, so (just a guess, but) maybe that sort of occurrence has become common enough to motivate health officials to refrain from recommending the scheduling of that shot for all but the highest risk categories? My overall take on the whole thing is basically this: a person whose state of health is sufficiently "just so", given a correctly prepared and properly stored vaccine administered at the right time, will most likely experience positive-to-neutral results; otherwise there is always the inherent risk that comes with injecting foreign (and potentially dangerous) agents into one's system. And you can't necessarily rely on the conclusions reached in peer-reviewed studies because most are directly/indirectly funded by the health-care industry itself, which is naturally in the business of selling stuff and minimizing legal liability. So look at the raw data and be very wary of interpretive statistics. For example, when the question comes up as to why adults are not typically offered a booster for the chickenpox vaccination, suppose you eventually come across papers such as this one [7] (which unequivocally asserts that there is "no evidence of a statistically significant change in the rate of increase [of shingles onset] after introduction of the varicella vaccination program"); take careful note of the highly presumptive interpretation and then draw your own conclusions accordingly using just a bit of logic and reasoning. 73.232.241.1 (talk) 12:57, 19 September 2017 (UTC) Exactly what in all that conspiracy theory contradicts what I said above? I said there was enough chickenpox around to keep the immune system primed normally. However, older people's immune systems can be weaker and need a bigger jolt to keep in order.
Dmcq (talk) 17:30, 19 September 2017 (UTC) ## Is it true to say that "in every disease the entire body is involved"? I'm now reading a medical book which states "in every disease the entire body is involved". Now, I am not sure if every disease involves every system in the body. 93.126.88.30 (talk) 15:09, 17 September 2017 (UTC) Yes, that does seem a bit silly. Acne doesn't normally affect your pancreas, for example. Perhaps they could say "potentially could become involved", as an infection that starts from, say, acne, could eventually spread to, say, the pancreas. StuRat (talk) 15:45, 17 September 2017 (UTC) • What do you mean by "medical" here? It's clearly starting from the point of holistic medicine, but the woo woo is strong in such fields and the risk is that it can easily diverge into nonsense such as homeopathy. Discussion and rational coverage of such fields is forbidden at WP, and any detailed coverage of them is just blanked with a redirect to a single simple bucket of ridicule, as if all are equally unworthy of consideration. Andy Dingley (talk) 15:55, 17 September 2017 (UTC) • If I get a severe cold or flu, it certainly "seems like" the entire body is involved, but to turn that experience into some kind of generalized truism seems highly suspect. ←Baseball Bugs What's up, Doc? carrots→ 16:08, 17 September 2017 (UTC) • The OP needs to cite title, author, edition and page if he wants any sort of real help. And presumably the author has defined his terms and that sentence does not sit alone, but is found within an argument. I wouldn't come here with the sentence from one of my favorite books, "Now it's broken and needs to be fixed", and expect comment if I didn't give the context so that editors could make sense of its significance. μηδείς (talk) 17:46, 17 September 2017 (UTC) Pedantic and off topic. The following discussion has been closed. Please do not modify it. • Not everything requires context.
"Elephants are a kind of fish" is just plain wrong, regardless of context. On the other hand, "whales are a kind of fish", could be correct, in some contexts, like "According to the archaic classification system of X, whales are a type of fish". StuRat (talk) 19:24, 17 September 2017 (UTC) But elephants are fish, at least cladistically... it's in the lead section of the article. Wnt (talk) 19:39, 17 September 2017 (UTC) Yes, Wnt, I had to laugh when Stu laid that whopper. Stu also thinks a graphic from the US Forestry Service hosted at the Boston Globe showing the reforestation of the Eastern US since 1900 is right-wing hate speech. Oh, well. A little knowledge is a dangerous thing. μηδείς (talk) 20:39, 17 September 2017 (UTC) No, your misrepresentation of what it said was right-wing misinformation, not hate speech. StuRat (talk) 20:49, 17 September 2017 (UTC) Well, no, that's not what you said either, but I knew I could count on you to take the bait. Here's the link to your original suggestion that a Forestry Department report at the Boston Globe was misinformation from a right-wing website. You have to understand, Stu, that some of us are actually credentialled in some of the fields we comment on, and know what sources to go to to prove our point, rather than just immediately answer every question posted on these desks based on some Faygo-intoxicated guess. μηδείς (talk) 01:41, 18 September 2017 (UTC) That's exactly what I said and I stand by it. If you want to debate it, that was the place, not here. That link DOES NOT SAY, AS YOU CLAIM "According to the trend from 1850 to 1920, according to this government map, the US would be entirely void of trees at this point." That's you just making crap up. StuRat (talk) 01:56, 18 September 2017 (UTC) I'm all for correcting people to be excessively pedantic, but not just to make a case against them (I'm not directing this at any particular party). We should return to the topic. 
Wnt (talk) 02:17, 18 September 2017 (UTC) Agreed. Hatted. StuRat (talk) 02:26, 18 September 2017 (UTC) I think the idea is pretty much correct -- at least, if we take "involved" to mean potentially affected during the future course of disease, rather than noticeably damaged. For example, acne might cause dermatillomania, depression, and apparently on some famous occasions can lead to septicemia and even death. Obviously, an intact and well-functioning immune system should keep the acne contained, but then again a truly well-functioning immune system probably ought to have stopped the acne to begin with. I would anticipate that the moral of the story might be the need for a complete medical history... no matter what. Wnt (talk) 18:11, 17 September 2017 (UTC) The difference between "involved" and "could potentially become involved" seems rather critical in a medical context. If a doctor said "your pancreas is involved" instead of "could potentially become involved", that would be malpractice. StuRat (talk) 19:21, 17 September 2017 (UTC) • Every disease might be an overstatement, but the great majority of diseases activate either a stress response or the immune system, both of which cause changes pretty much throughout the body. Looie496 (talk) 19:06, 17 September 2017 (UTC) ## "normally closed" relays Please could you tell me what consumer electronics are likely to contain "normally closed" relays. I know microwaves contain "normally open" relays which switch to "normally closed" when power is applied. Thanks for your time. — Preceding unsigned comment added by 36.85.29.51 (talk) 18:17, 17 September 2017 (UTC) Some type of overheat switch which opens when the device overheats, then you hit the reset button to close it again ? For example, a hair dryer might have this. StuRat (talk) 19:11, 17 September 2017 (UTC) I think my first electronic experimenter kit as a kid had at least one of each ... I would make those things buzz... 
the FCC probably had a van out looking for me. ;) Wnt (talk) 19:43, 17 September 2017 (UTC) • An uninterruptible power supply (UPS) might have a NC relay on its mains input. The mains will be connected to the UPS output unless the UPS actively decides otherwise and disconnects the mains by energizing the relay coil. (In fact, the relay will probably be SPDT with the mains on the NC input and the inverter on the NO input) -Arch dude (talk) 20:17, 17 September 2017 (UTC) Sounds like you already got some good answers, but to be clear, most general-purpose relays have both a normally open and a normally closed pin, making them SPDT (single-pole double-throw) switches. As far as applications, they're all over the place: a magnetic door lock, for example, and, as mentioned, anything where you want a connection to close during a power loss (probably to some other system), so you see them a lot in safety-type applications where you need a "cut-off" of some sort. As just a fun fact, you can use a DPDT relay to make a forward/reverse controller for a motor. When the relay is activated, the motor goes forward; when de-activated, the polarity reverses. You see these in kids' electric cars a lot. Drewmutt (^ᴥ^) talk 02:13, 20 September 2017 (UTC) ## Walking/running downhill energy consumption Walking/running downhill: what consumes more energy for a human?--Hofhof (talk) 21:08, 17 September 2017 (UTC) Considering the high amount of energy we burn just standing still, whatever gets you to the bottom fastest would "stop the clock" and thus reduce the total amount of energy burnt for the trip. That would be running. However, if you phrase the Q differently, such as "What method will get you to the bottom of the hill and consume the least energy, over the course of an hour", then a slow walk would probably be better, since the clock no longer stops when you get to the bottom. The temperature would figure into it, though.
If it's so cold that mere walking results in shivering, then running may conserve energy. StuRat (talk) 21:14, 17 September 2017 (UTC) • The steepness is also relevant if you're having balance problems: if it's so steep that you risk falling down the hill, your effort to balance yourself and stay on your feet might take a bit of energy, and of course you're likely to be steadier (and thus require less balancing effort) if you're going more slowly. Of course, if you fall down on a steep slope, you may expend no energy (other than the basal metabolic rate) in descending the rest of the hill, but that might cause the expenditure of energy for other purposes. Nyttend (talk) 01:55, 18 September 2017 (UTC) According to the laws of physics, going downwards should be far easier, with lower energy consumption. In reality, however, with walking it seems to be the opposite. Not because of physics but because of lack of training: humans are not used to constantly decelerating bipedal motion, are thus usually very ineffective at doing it, and so, paradoxically, need more energy walking downhill. --Kharon (talk) 05:15, 18 September 2017 (UTC) I used to do a lot of hillwalking and the descent phase always gave me sore quadriceps femoris muscles. A lot of walkers get knee pain from descents, [8] a condition called Iliotibial Band Syndrome. It's the stress of controlling the speed of your footfall, a problem that is going to be worse if you're running; more speed equals greater kinetic energy. I have seen people lose control when descending too fast: a jog turns into enormous strides, which turn into head-over-heels tumbling. Quite comical unless you meet any large rocks on the way. Alansplodge (talk) 10:14, 18 September 2017 (UTC) When I climbed Mount Kinabalu on the standard guided tourist route from Timpohon Gate, I had a similar experience. Ascending was fairly tiring.
Normally done over two days which is what I did, I think that's mostly because if you want to do it over 1 day you have to make good time and also may not reach the summit at the best of times (it can cloud over sunrise) and also for commercial reasons. Unless you're fairly fit and making good time, on the first phase you'll have porters speeding past you with large gas bottles etc on their backs, which at very least means you probably aren't going to moan when you see the food prices in Laban Rata (even despite recognising the porters aren't getting much from that). The second phase from Laban Rata is not as long although I think steeper on average (the vertical ascent is less than the first phase but the distance is even less) and with more tricky terrain and also generally done mostly when it's dark and given the altitude you may get mild altitude sickness. The descent follows on from the second phase of the climb (i.e. the total climb is over 2 days). From memory, the descent is not quite so tiring overall but boy can you feel it in the legs as most information guides note [9], probably for days afterwards. And this was as a ~19 year old albeit not that fit. IIRC my travel companion actually read stuff suggesting that some people found jogging or running down was actually kinder on the legs, but it's not something I researched myself. This source for example [10] suggest other things and on another issue it notes that doing it may seem to reduce pain on descent, but actually increase the required recovery time. Nil Einne (talk) 12:34, 18 September 2017 (UTC) • User:Kharon, the sources I cited said you always expend less energy walking or running down hill, but that at steeper slopes the energy saved is less because the stride becomes irregular. At some point you are simply climbing downhill, and using your legs to break a fall, but at that point it's not really walking or running anymore. 
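For what it's worth, there is a standard empirical model for how slope affects foot travel: Tobler's hiking function. It predicts walking speed rather than metabolic cost, so it is only an indirect stand-in for the energy question, but it captures the experience described in this thread; a quick sketch:

```python
import math

def tobler_speed(slope):
    """Tobler's hiking function: walking speed in km/h as a function
    of slope (rise over run). Fastest on a gentle ~5% downhill."""
    return 6 * math.exp(-3.5 * abs(slope + 0.05))

for s in (-0.30, -0.10, -0.05, 0.0, 0.10, 0.30):
    print(f"slope {s:+.2f}: {tobler_speed(s):.2f} km/h")
```

The model peaks at a roughly 5% downgrade, while steep descents slow a walker almost as much as climbs of the same grade, which is consistent with the descent-control effort described in the posts above.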
User:Alansplodge, one of the sources actually recommends running down hill as a good way to build quadriceps muscle, hence your soreness on doing this--it's a different type of exercise. μηδείς (talk) 20:44, 18 September 2017 (UTC) Note to self - always read the references that you link :-) Most older hillwalkers in the UK use trekking poles in an attempt to reduce the impact on their knees. Another point is that when planning a route using Naismith's rule, you add time for ascent (usually 1 minute for every 10 metres up) but don't add time for descent. Alansplodge (talk) 10:13, 19 September 2017 (UTC) Perhaps we should start using Tobler's hiking function instead of Naismith, though the spike in its graph makes it look rather implausible to me. AndrewWTaylor (talk) 16:28, 19 September 2017 (UTC) But if you used an ebike with a recuperation-capable (regenerative) controller, you could even charge your batteries on the way. So be smart, buy your very own ebike asap! :D --Kharon (talk) 21:49, 19 September 2017 (UTC) ## Civil engineering asset management Is asset management and ongoing maintenance in civil engineering generally more closely aligned to civil engineering design and consulting as opposed to construction? 90.198.254.50 (talk) 22:30, 17 September 2017 (UTC) Civil engineering is a huge field of work and civil engineers are typically "jacks of all trades", since they have to reckon with everything that may happen to their construction, design or planning. So of course asset management and maintenance seem to have no direct connection to design, construction and planning, but since they usually are an essential part of everything that is designed, constructed and planned, engineers have to learn how they are done too. --Kharon (talk) 05:33, 18 September 2017 (UTC) # September 18 ## Disability-affected life years from falls According to this map, in 2004, Iraq had by far the world's worst statistics for disability-adjusted life years related to Falling (accident).
It accurately reflects its source, according to which, 50 of the 192 countries tracked had a score of 200+, with 28 being 200-299, 13 being 300-399, 6 being 400-499, and the top three countries being Yemen at 524, Sri Lanka at 649, and Iraq at 1002. Why would Iraq have such horrid statistics? Is this merely an artifact of damage to the medical infrastructure during the ongoing war? It seems a bit extreme, especially since Afghanistan and Somalia, which both had less medical infrastructure in 2000 than Iraq, were tied for eighth at 417, and another country with a war during the data collection, Liberia, had just 133. Nyttend (talk) 01:53, 18 September 2017 (UTC) ISIS has a habit of throwing people off roofs (homosexuals): [11]. If those are included in the data, that might explain it. StuRat (talk) 02:06, 18 September 2017 (UTC) I strongly doubt that ISIS is at all relevant, since this map relies on figures published in 2004. Nyttend (talk) 02:26, 18 September 2017 (UTC) Good point, but such violent practices did not begin with there with ISIS. Their predecessors were also quite violent. StuRat (talk) 02:29, 18 September 2017 (UTC) And they were really good at making it look like an accident? Of 19 (talk) 22:37, 18 September 2017 (UTC) Somehow I doubt if there was much of an investigation when they found a body on the street under a tall building with fall injuries there, unless the victim was a friend or relative of whoever was in charge at the time. StuRat (talk) 23:01, 18 September 2017 (UTC) Those awful DALY things from the beginning included an absolutely arbitrary curve claiming that young adults are more important than anybody else; they were then immediately used to show that more funding emphasis should be put on treating diseases of young adults. Think of a slavemaster prioritizing his stock. Scientific fraud is almost too kind a description for this. 
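The "arbitrary curve" referred to here is the age-weighting function from the original Global Burden of Disease DALY formulation, W(a) = C·a·exp(-βa) with C = 0.1658 and β = 0.04 (constants from the GBD literature); a few values show the young-adult peak being criticized:

```python
import math

def gbd_age_weight(age):
    """Age-weighting curve from the original Global Burden of Disease
    DALY formulation: W(a) = C * a * exp(-beta * a),
    with C = 0.1658 and beta = 0.04. It peaks in young adulthood."""
    c, beta = 0.1658, 0.04
    return c * age * math.exp(-beta * age)

peak_age = max(range(101), key=gbd_age_weight)
for a in (5, 25, 65):
    print(f"age {a:3d}: weight {gbd_age_weight(a):.3f}")
print(f"weight peaks at age {peak_age}")
```

Analytically the maximum falls at a = 1/β = 25 years, so under this weighting a year of healthy life lost at 25 counts for roughly twice as much as one lost at 5 or 65.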
That said, the article says that some sources have stopped using the arbitrary curve (though this indeed is its own kind of arbitrary decision, and another value judgment ...) The figure points straight at an Excel spreadsheet as its source ... it might be worth looking into. My thought is that if Iraq is a war zone that has chased out all the old and young people, it may have more people per capita, by DALY standards, than other countries. Wnt (talk) 02:14, 18 September 2017 (UTC) • According to http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0131834, in Iraq the majority of falls occur in houses, and the majority of victims are women. Only a small fraction are fatal. The causes are not clarified, but I suspect that one of them is the fact that the great majority of Iraqis sleep on their roofs during most of the year -- so they spend a lot of time going up and down stairs while sleepy. Looie496 (talk) 02:53, 18 September 2017 (UTC) Holy.... what's wrong with Motherland. Do they think they're cats or something. "See, Ivan, 5 storeys isn't high. Hold my beer..." 78.53.109.203 (talk) 03:16, 18 September 2017 (UTC) ## Stud finder Yesterday I had to do some assembly work that required drilling into the wall, and my fancy-schmancy $50 stud finder indicated a wall stud just where I wanted one to be, but when I drilled into that "stud", there was only drywall with nothing behind it! What could have caused this gadget to give me the false positive, and (more importantly) how can I prevent this from happening again? 2601:646:8E01:7E0B:3DB7:8D6E:A762:14CC (talk) 07:47, 18 September 2017 (UTC) 1. How some work: http://www.nytimes.com/2001/04/12/technology/how-it-works-detectors-can-find-just-the-right-spot-to-drive-that-nail.html 2. How to lessen chance of false positives: http://zirconcorp.custhelp.com/app/answers/detail/a_id/28/~/is-the-stud-you-found-really-a-stud%3F-minimize-false-positives. 129.55.200.20 (talk) 13:09, 18 September 2017 (UTC) Thanks!
So the key is to look for regular intervals, right? 2601:646:8E01:7E0B:EDA1:6DA2:2F4C:8E0A (talk) 05:43, 19 September 2017 (UTC) Most stud finders find a difference, not a stud. If there is a 4 inch air gap next to a 2 inch air gap, that is a difference. It will beep or light up when you are over the difference. When you find a stud, the indicator goes off on each side of the stud. Mark both sides. It should be about 1.5 to 2 inches wide, depending on what is used for the stud. Move to both sides. You should find another stud on each side about 14 or 15 inches away. They are normally set 16 inches to the center of each stud, leaving about a 14 to 15 inch gap between studs. If you find that, you likely have a stud. If you don't, you need to keep looking. Also, don't assume there is a stud. When people do work themselves, they do all kinds of silly things like putting up drywall without any studs behind it. 209.149.113.5 (talk) 13:02, 19 September 2017 (UTC) • Most studs in your house are vertical and run from floor to ceiling. Therefore, you can cross-check the stud location by looking for it at several heights on the wall. If these separate heights disagree, be suspicious of a false reading or an unusual situation. Find the same stud from both sides at each height. -Arch dude (talk) 14:48, 19 September 2017 (UTC) • It's worth noting that most stud finders out there are "edge" stud finders. As previously mentioned, they sense change, so it's important that when you initially start scanning you do so in a place known not to have a stud, for calibration. Or, you can do as I do and use a strong magnet to find the screws that anchored the drywall to the stud. Drewmutt (^ᴥ^) talk 02:21, 20 September 2017 (UTC) ## Do tree trunks really bend at 32 miles per hour? That's what the Beaufort scale implies. How much could a deciduous trunk bend before breaking? (though in some cases it would uproot before that happens).
Why is the Beaufort scale 12 description "devastation" while the Category 1 hurricane description doesn't seem as bad? (and Floridians even drive over bridges in that). Sagittarian Milky Way (talk) 09:59, 18 September 2017 (UTC) The best reference I could find is "Trees, regardless of size, all break at the same wind speed. Here’s why" from the American Association for the Advancement of Science. Anecdotally, a youth activity centre in Essex used to have a wooden abseiling wall attached to an old sweet chestnut coppice (before the days of health & safety obsession) which had three large trunks, each of more than 2 metres circumference. Even in a light breeze, the top platform which sat between the three trunks would sway rather alarmingly and had to be closed on windy days. Although the trunks weren't bending visibly, the swaying motion could only have been caused by the flexion of the trunks. Alansplodge (talk) 10:35, 18 September 2017 (UTC) That article actually says the wind speed required does vary a bit, but not by much, for the factors they considered (wood elasticity and tree size). They also didn't look at the effects of leaves. A tree with lots of leaves (that stay on during the storm) will absorb more wind than a leafless tree. And some tree forms, like palm trees, with only a small clump of leaves at the top and a massive trunk, seem like they evolved to resist tropical storms better, too. StuRat (talk) 23:14, 18 September 2017 (UTC) Tree trunks would bend at any wind speed, just not noticeably so at low speeds. For that matter, any material would bend, with crystals bending perhaps the least, but still a tiny bit. Buildings bend and sway, too. StuRat (talk) 23:04, 18 September 2017 (UTC) Presumably the person who extended the scale to land meant "visible to the naked eye" when he said whole trees in motion = 7, large branches in motion = 6.
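The conventional empirical relation between Beaufort number B and wind speed, v = 0.836·B^(3/2) m/s, makes the numbers in the question concrete: force 7 ("whole trees in motion") begins near 32 mph, and force 12 sits near the roughly 74 mph threshold of a Category 1 hurricane, so the two scales describe broadly the same winds at that point. A quick sketch:

```python
def beaufort_to_wind(b):
    """Empirical Beaufort relation: v = 0.836 * B^1.5 (m/s).
    Returns (m/s, mph)."""
    ms = 0.836 * b ** 1.5
    return ms, ms * 2.237

for b in (6, 7, 12):
    ms, mph = beaufort_to_wind(b)
    print(f"Beaufort {b:2d}: {ms:5.1f} m/s = {mph:5.1f} mph")
```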
In our tropical storms I'd been so distracted by trying to get "air" from sprinting downhill with umbrellas and seeing how many degrees I could lean that I forgot to test the whole trees in motion thing. Sagittarian Milky Way (talk) 23:49, 18 September 2017 (UTC)
Yes, the point of those notes is that a layman can assess the wind speed without any specialist knowledge or technical equipment. Our article says it was adapted "in the 1850s". Alansplodge (talk) 10:18, 19 September 2017 (UTC)

## activated charcoal effect on carbohydrates, proteins and fats

The activated charcoal article specifically states that it does not work on alcohol. What about the other macronutrients: carbohydrates, sugar/glucose, proteins and fats? Does it bind to those and prevent their absorption by the gastrointestinal tract? I've tried researching this on my own but I can't seem to find an answer one way or the other. I realize that carbohydrates, proteins and fats are not poisons but how would activated charcoal differentiate between a poison and a macronutrient? I would assume that it would bind to all of them. Thanks for your help. — Preceding unsigned comment added by 62.16.22.76 (talk) 11:32, 18 September 2017 (UTC)
You didn't say so, but are you asking because of one of the activated carbon diet and/or "detox" fads? If so, I'd suggest you save your time and money. By weight, in a pure solution, you may need 10 or 100 times as much activated charcoal as the substance you want to adsorb before you can capture most of it. For small quantities of poison, that may not be too hard to accomplish, but for bulk foods the corresponding quantities involved are likely to be impractical or even dangerous. Dragons flight (talk) 13:48, 18 September 2017 (UTC)
I'm not following any kind of fad diet, I was just curious when reading the article because the article specifically mentions activated charcoal does not work on alcohol but omits to mention any of the other macronutrients.
I wondered if that was because activated charcoal does indeed work on the other macronutrients, so I searched for answers but found nothing helpful. Still curious, I then came to the reference desk to ask in the hope that someone more knowledgeable and better at finding references than me could shed some light onto the question. I hope that is okay. 62.16.22.76 (talk) 14:38, 18 September 2017 (UTC)
The "Mechanism of action" section of that article is woefully unreferenced, if some kind editor has time to take a look at it. shoy (reactions) 15:05, 18 September 2017 (UTC)
I doubt if much of any of those would pass through an activated charcoal filter. They would quickly clog it. As for consuming activated charcoal, you'd need a huge quantity to bind to all that, which would make you sick. BTW, if you want a fat binder, try Orlistat (but be prepared for "anal leakage"). StuRat (talk) 16:29, 18 September 2017 (UTC)

## Does normal charcoal have any poison absorbing properties?

I understand that activated charcoal greatly increases the surface area of charcoal to make it effective at absorbing poisons. But does normal charcoal have any poison absorbing properties? Again, I did try researching this myself but everything I found was about activated charcoal. I want to know if normal charcoal can bind to poison or not. Thanks for your help. 62.16.22.76 (talk) 11:50, 18 September 2017 (UTC)
Yes, but they work by adsorbing material onto their surface, rather than absorbing it into their bulk. Activated charcoal is just charcoal produced by a process (either thermal or chemical and thermal) which increases this surface area. Andy Dingley (talk) 12:32, 18 September 2017 (UTC)

## How useful is the telescopic sight on a rifle?

When shooting a rifle with a telescopic sight, the bullet will drift due to the wind and gravity to one side and down.
Since it will never hit the crosshair, would it make any difference if the shooter had some binoculars attached to his face or helmet?--Hofhof (talk) 17:12, 18 September 2017 (UTC)
It will hit the crosshair. The turrets on the side of the sight allow the crosshair position to be adjusted, to compensate for bullet drop, windage etc. When the sight is 'dialed in' for the range and conditions, the bullet should go where the crosshair indicates. Although simple "crosshairs" aren't the only form of reticle available. Some have a scale on them, so that the shooter chooses an aim point on that scale, rather than adjusting the turrets. Andy Dingley (talk) 17:18, 18 September 2017 (UTC)
Should a marksman use head-mounted binoculars he would have to sight the bead of the rifle, which would end up looking fuzzy at short focal distances. The bead is also rather large and can be larger than the target at 400 yards on some rifles. Of course, he may have infinity focus binoculars – if there are such things. P.S. Why do we call them cross hairs? All my sights up until I gave up shooting used spider's web (with the exception of a laser sight which I borrowed). Think also that AD was oversimplifying things. On a 2500 yard range with a .303 one needs little flags all down the range to show moment to moment fluctuations in crosswind. It is nearly one and a half miles and the round takes several seconds to miss the targets I was hopefully aiming at, regardless of where the scope was pointing. Aspro (talk) 19:07, 18 September 2017 (UTC)
For an example of actual-hair "hairs", see Mary Babnik Brown. DMacks (talk) 19:21, 18 September 2017 (UTC)
Articles: Telescopic sight, Reticle. Blooteuth (talk) 23:19, 18 September 2017 (UTC)

## Can sniper rifles be automated?

I've often thought that sniper rifles should be automated. Some reasons: 1) The shooter's movements make it difficult to maintain aim.
If the operator was instead seated some distance away, or even on the other side of the world, controlling the apparatus remotely, this variable would be removed. Also, the shooter could then be hidden from snipers in the opposing army. 2) An automatic correction method could be used, where the target is painted with a laser (possibly IR, to keep it hidden), then each shot would be tracked (perhaps tracer rounds would be needed for this) and the error corrected for each subsequent shot. I would suggest 2 or 3 rifles in an array, all precisely aimed the same, yet physically isolated so as to not be affected when another fires. This would allow multiple shots in rapid succession, each correcting for the previous error, without allowing the target time to flee. I would suspect that this setup could significantly extend the effective range, and require far less training and experience to use effectively. Also, fewer snipers would be needed, since the same one could operate weapons in multiple locations without needing to travel from location to location (somebody else could roughly position them). So, has anyone done this? If not, why not? StuRat (talk) 20:40, 18 September 2017 (UTC)
The US already has semi-automated sniper rifles. The operator identifies the target's thermal infra-red signature and if the target should lean against a tree, suggesting he may be there long enough for the round to reach him/her/it – the gun goes off (no need to squeeze the trigger). Should the target move earlier than expected – well, EXACTO will correct the trajectory in-flight. This type of technology is what your tax dollars are paying for. When the 3rd world war comes, just follow me into a cave and let Arnold Schwarzenegger and Linda Hamilton sort it all out for us, so that we can look forward to a sequel, providing of course that they win, in a real war. Aspro (talk) 21:25, 18 September 2017 (UTC)
Articles: Sniper, Narcissistic personality disorder, Serial killer.
Blooteuth (talk) 23:07, 18 September 2017 (UTC)
I've heard that bolt-action rifles tend to be the most accurate, with the design (not sure how, but...) being more accurate overall than semiautomatic rifles; no source, but it's something to look into. Remember that with a sniper rifle the point is being supremely accurate and delivering sufficient firepower in the perfect spot, not delivering a ton of firepower to the general area; it's like using smart bombs versus carpet-bombing. Also, "fewer snipers" — part of the point of having lots of snipers is redundancy: individual snipers are vulnerable, having lots of snipers means that you can cover the enemy from lots of angles, and eliminating one sniper doesn't do the enemy much good if he can't get rid of the rest. Nyttend (talk) 23:21, 18 September 2017 (UTC)
A couple of problems with this proposal: (1) The loss of the datalink between the operator and the rifle positions (such as through enemy jamming) will make the latter useless; (2) A remotely-controlled rifle cannot change position unless mounted on a mobile platform, and even then, such a platform cannot conceal its movements as well as a human sniper (i.e. it will be easier for the enemy to see and destroy, or at least get out of its line of fire). 2601:646:8E01:7E0B:EDA1:6DA2:2F4C:8E0A (talk) 05:41, 19 September 2017 (UTC)
This "mobile platform" could be made rather stealthy. Place it in position when no enemy is around (or perhaps they are distracted by an attack from elsewhere), put the equivalent of a ghillie suit over it, and have it move slowly enough that nobody spots the movements. Having no need for breaks for food, water, the toilet, etc. should then make it less noticeable over the following days, as should a lack of IR light leaking out. As for the datalink, you could do a direct shielded wire to somebody in a bunker nearby, and they could take over if jamming occurs. StuRat (talk) 17:44, 19 September 2017 (UTC)
While you're at it ...
what has kept them from putting sniper rifles on drone aircraft? I mean, obviously the bombs are more powerful, but having some drone far away -- potentially solar powered, even -- making virtually unlimited shots would be damned demoralizing, emphasis on the damned. Wnt (talk) 08:59, 19 September 2017 (UTC)
That's the Autonomous Rotorcraft Sniper System. There are technical issues to be dealt with. To shoot a bullet instead of a missile implies stronger recoil. You'll need a more solid base. You'll need a bigger drone. This defeats the purpose of shooting bullets in order to have a small drone. Since bullets are not guided, your sniper drone would also need to be more stable in flight (which also implies a bigger drone, with maybe longer wings or a tail). Modern drones like the Predator drones fire AGM-114 Hellfire, which are about 100 lb and can be used to disable armored vehicles, bridges and so on. Jihadi John was killed inside a vehicle by one of these. That would be hardly possible with a mere bullet. B8-tome (talk) 12:35, 19 September 2017 (UTC)
(edit conflict) Wild speculation on my part, but it could have to do with the altitude. Killer drones fly high to avoid detection by potential targets; Unmanned_aerial_vehicle#Military pointed me to this page saying one of the earliest killer drones flies at an altitude of 26,000 feet (7,900 m), and I am going to assume it is higher for more modern stuff. On the other hand, Sniper#21st_century gives a record sniping distance of 3.5 km. Supposedly deviations in the air and the like prevent accuracy at larger distances; I doubt[citation needed] that a flying drone could beat a grounded human for steadiness of the aim. If so, maybe sniper drones would lose much of the operational advantage because of the required approach distance. TigraanClick here to contact me 12:49, 19 September 2017 (UTC)
Of course it could be automated! But if both sides of a war automated their rifles, what would be the point of shooting?
Nimur (talk) 14:21, 19 September 2017 (UTC)
I can indeed imagine a time when warfare is only between drones. Once one side's drones destroy all the other side's, they would need to surrender or be wiped out. StuRat (talk) 17:34, 19 September 2017 (UTC)
On 2: sniper rifles are already on the edge of still being mobile. For example, the .50 caliber M96, widely used in US services, has a hefty weight of 11.35 kg. Unloaded, with no scope attached! If you were to add laser targeting and automatic aiming, that weight would easily triple or more. A "mobile" BGM-71 TOW launcher weighs 92.5 kg, to give you some idea of what a "mobile laser targeting system" alone may add. If it was practical, some army would have it today. But actually most armies already have what you imagine. In fact the Germans invented this in WWII, on some of their infantry fighting vehicles (originally called "Schützenpanzer", which could be translated to "Snipertank"). --Kharon (talk) 21:00, 19 September 2017 (UTC)
Btw. German engineers perfectly implemented your imagination already in 1960. They called it the Flakpanzer Gepard. In case you are interested in a promotional video impression: [12]. On top of that, it was capable of operating autonomously without a crew (stationary), only wired by cable to a hidden command post - in principle like a drone. Though with its 50 ton weight it is hardly to be confused with today's drone "toys", of course. --Kharon (talk) 04:04, 20 September 2017 (UTC)

# September 19

## Black hole vs. white hole

A thought occurred to me: if a black hole is a region of spacetime from which nothing may leave, and a (wholly theoretical) white hole is one in which nothing may enter, what does the math say about what would happen if one of each of these objects were to meet? Thanks! – ClockworkSoul 01:01, 19 September 2017 (UTC)
For as simple an answer as possible...
Any object approaching a white hole will fall towards the event horizon around the white hole, but not enter the event horizon until it is a black hole. Because "any object" includes black holes, black holes falling towards a white hole won't reach the white hole event horizon while it is a white hole. 209.149.113.5 (talk) 12:54, 19 September 2017 (UTC)
I've just about given up on this stuff. One of the better? sources for our article [13] says that white holes spew out matter, yet the matter they spew out will self-collapse from gravitation. So why doesn't it collapse before it comes out...? Hell, I still don't really understand Kruskal-Szekeres coordinates or why it looks like the black hole event horizon is moving out at the speed of light endlessly so that (apparently) everything will fall into it. Wnt (talk) 20:13, 19 September 2017 (UTC)
Every normal, active Sun is a "white hole" if you like. Our Sun has a surface temperature of 5,000,000 degrees Kelvin btw. I dare you to propose anything (except another Sun) that could manage to fly near or even past (aka enter) that. --Kharon (talk) 21:17, 19 September 2017 (UTC)

## Caffeine allergy

Can a caffeine allergy really kill someone who consumes even a small amount of caffeine? (Question inspired by the 4th case in Criminal Case: Pacific Bay.) 2601:646:8E01:7E0B:EDA1:6DA2:2F4C:8E0A (talk) 05:47, 19 September 2017 (UTC)
It's possible. The medical consensus is that anaphylaxis and death can be caused by very small amounts of any antigen.[1] That said, caffeine allergies are rare, and I can't find any reports of deaths due to a caffeine allergy. C0617470r (talk) 07:15, 19 September 2017 (UTC)
References
Some may also confuse an allergic reaction with the inability to metabolize caffeine. The latter could result in cardiovascular issues and death but without anaphylactic shock (using a caffeine quantity that someone else could tolerate).
Some animals are also more sensitive to caffeine for that reason, like cats and dogs. —PaleoNeonate – 07:39, 19 September 2017 (UTC)
This is a common confusion: people confuse the meanings of a Food allergy with a Food intolerance. Food allergies are always immune system responses to an ingested substance, such as anaphylaxis, hives, or the like. Food intolerances can include things like excess gas, indigestion, acid reflux, feelings of bloating, or other responses which are not immune system responses. --Jayron32 10:45, 19 September 2017 (UTC)
A well-known example being lactose intolerance, which is caused by the body being unable to digest lactose. --47.138.161.183 (talk) 05:41, 20 September 2017 (UTC)

## How many pages of information for a science degree?

How many pages are needed to contain all the knowledge for a college degree? (for reading and as look-up material like dictionaries). Interested in computer science, mechanical engineering, physics.--Hofhof (talk) 16:06, 19 September 2017 (UTC)
Much of that knowledge (nay, perhaps the bulk of it) is practical knowledge which is not learned through written material, but through direct instruction, lecture, labs, practice, etc. Also, "science degree" is far too vague; there are hundreds of different kinds of science degrees. Thirdly, I have no confidence this is the sort of thing anyone has ever bothered to study before (given my second point), which makes searching for such non-existent data fruitless. --Jayron32 16:47, 19 September 2017 (UTC)
For an alternative link that is blue see experiential knowledge. Looie496 (talk) 21:38, 19 September 2017 (UTC)
• I did perhaps half of an Applied Physics degree by understanding four books (I bought maybe just six textbooks over the course, then I read those until the ink wore out). The rest was dipping into books and journal papers in the library. Those four books though (Bleaney & Bleaney? [14], Solymar & Walsh?) represented serious effort to understand them.
You could also just try counting how many pages there are in a copy of Feynman. I did physics partly because it involves hard reading of few books, rather than something like palaeontology where the corpus is perhaps less terrifying on a line-by-line basis, but much larger. Andy Dingley (talk) 17:04, 19 September 2017 (UTC)
My answer goes along the same lines as the answers above. It's not enough to do the reading, but if you read thoroughly one good book per course, it would be reasonable to read 25 to be at graduate level. B8-tome (talk) 18:00, 19 September 2017 (UTC)
1000-1200 pages would do. When you have memorized, understood and then can apply that much science, you have earned your degree! In case it's specialized science it could be less. For example a metallurgist could probably find everything in "Donald R. Askeland - The Science and Engineering of Materials, 2011" (only 896 pages thin). --Kharon (talk) 21:25, 19 September 2017 (UTC)
I suspect that this has changed, and much of the need to memorize info has been replaced by the ability to access it instantly, when needed. Of course, this doesn't apply to all info. For example, I suspect a chemist still memorizes the atomic numbers, weights and electron configurations of the most common elements, but not the rarer elements and isotopes of each. StuRat (talk) 23:56, 19 September 2017 (UTC)
• The majority of my physics course was taught with Tipler's Physics for Scientists and Engineers. This is 1172 pages, and it covers everything on the core syllabus. Of course, there are also elective modules which add more (a random example: my biophysics module used Philip Nelson's Biological Physics, which is 600 pages, although we didn't cover every chapter) and there's stuff from other fields that overlapped (for instance, I needed a roughly 300 page linear algebra textbook and a C programming book), and many lecturers gave us 50-100 pages of notes. 2000 pages seems like a decent estimate.
Smurrayinchester 08:36, 20 September 2017 (UTC)

## Common cold and temperature.

It is often said that being in the cold has nothing to do with catching a common cold, but people do catch a cold after being in the cold. Furthermore, staying out in the cold seems to make it worse. I understand that viruses cause a cold but there must be an indirect way in which the low temperatures increase the chances of catching a cold. The old wives' tale can't have survived this long if it's not true. 90.198.254.50 (talk) 17:17, 19 September 2017 (UTC)
Yep, cold, dry air causes chapped skin, including in the respiratory tract, and breaks in the skin allow microbes in. There could also be effects on people spending more time inside, close together, causing more spread of disease. Vitamin D production from sunlight may also be down, but I don't know that this vitamin has an effect on colds. StuRat (talk) 17:23, 19 September 2017 (UTC)
Scientists have done experiments to prove that being cold is not a direct cause of a cold (because a cold is always caused by a virus), but that doesn't mean it is not an indirect cause, as explained above. Often, being cold (a drop in core temperature) just allows existing viruses to gain a hold when they had previously been inhibited by the immune system. Dbfirs 18:17, 19 September 2017 (UTC)
The problem is caused by being afraid of the cold. There are many people who turn on the heating when it gets colder than 20 C in their homes and who dress for winter when the outside temperature dips below 15 C. Their bodies stay acclimatized for tropical conditions all year round, even in winter when it's -5 C outside. When they go outside in winter, their blood flow to their body extremities, their noses etc. shuts down. The immune system then becomes less active in their airways, making them susceptible to catching a cold. Another contributing effect here is that these people tend to not be outside for long during the cold periods.
If they do regular exercise, they prefer to do that indoors rather than outdoors; they hate the idea of running outside at, say, 2 C. Weeks before many people get hit by the cold, the cold virus will already be there in nature; if you run outside every day for an hour, you're likely to breathe in that virus, but in low concentrations that will lead to immunity without making you ill. Also, it may be a less virulent version of the virus that will later cause severe cold symptoms in many people. Count Iblis (talk) 20:43, 19 September 2017 (UTC)
It's true, those who cringe at the thought of acclimating to the discomforts of environmental changes often seem to be the most susceptible to illness. Reminds me how I was once dared to dive into an ice-cold swimming pool situated next to a hot-tub. Several people there advised against it on the grounds that I might catch a cold, have a heart attack, etc. I did it anyway (several times) and, exhausting as it was, I actually felt quite invigorated for days after that. Another friend made a half-hearted attempt but quickly retreated out of discomfort and swears to this day that it only resulted in stiff muscles for him. I've always wondered though if that would have been the case had he truly been willing to "take the plunge"? 73.232.241.1 (talk) 05:06, 20 September 2017 (UTC)
A cold happens when rhinoviruses implant in the nasal mucosa and manage to reproduce fast enough to outrun the immune response for a while. There is a good bit of evidence that extended exposure to cold air can cool the nasal tissues enough to weaken or delay the immune response there. However, many doctors are skeptical that this mechanism plays more than a minor role in seasonality. Looie496 (talk) 21:33, 19 September 2017 (UTC)
The low temperatures not only make the immune response less effective. The low temperatures also allow the viruses to remain virulent for a longer time outside the body (see e.g. [15]), facilitating the transmission.
Dr Dima (talk) 22:19, 19 September 2017 (UTC)

## What exactly is "Thermally activated delayed fluorescence"?

I've seen the term used many times, typically with regard to materials for OLEDs, and I'm not quite sure what it means. OrganoMetallurgy (talk) 17:31, 19 September 2017 (UTC)
Delayed fluorescence sounds like phosphorescence, but the "thermally activated" bit sounds like it only emits light at certain temperature ranges. StuRat (talk) 17:37, 19 September 2017 (UTC)
• This paper has a relatively decent intro, depending on your ability to read scientific papers, as well as keywords to feed to your bibliographic search. For instance, I suspect this paywalled chapter would be interesting to read (but I cannot go through the paywall). TigraanClick here to contact me 17:48, 19 September 2017 (UTC)
There are three generations of OLED tech. The first were based on fluorescence, then the second on phosphorescence, and now a third generation using this TADF. Fluorescent ones were inefficient, as they only put 25% of their energy into the fluorescent singlet states (I think that 25%'s a pretty fundamental limit from the 1:3 ratio). The phosphorescent ones managed to be efficient emitters by adding noble metal dopants which made the triplet states into practical emitters too, but these were expensive to make. The new generation doesn't need these metals, but potentially offers similar efficiencies. It overlaps with Intersystem crossing. They work by eliminating the triplet states, by converting them to useful singlets. This up-conversion obtains its energy thermally. Current SotA is here: Dias FB, Penfold TJ, Monkman AP (9 March 2017). "Photophysics of thermally activated delayed fluorescence molecules". Methods Appl. Fluoresc. 5 (1): 012001. doi:10.1088/2050-6120/aa537e. (And why am I telling anyone with a username like that how to suck eggs?)
Andy Dingley (talk) 18:10, 19 September 2017 (UTC)

## strange equation in physics class

My daughter is taking physics II in college. She has the equation ${\displaystyle C={\frac {k\cdot \epsilon _{0}\cdot A}{d}}}$ (Sorry, I'm not too familiar with TeX), where C is capacitance, epsilon_0 is the permittivity constant, A is the surface area and d is the distance. Usually k is Coulomb's constant, but she is wondering if k is something else here. I say that the equation doesn't make sense because in ${\displaystyle k\cdot \epsilon _{0}}$ the physical units cancel out, leaving Farads on the left and meters on the right. Does she have something wrong in this equation? Bubba73 You talkin' to me? 23:44, 19 September 2017 (UTC)
May we see the units you are using for each term? StuRat (talk) 23:52, 19 September 2017 (UTC)
These are from her writeup of her notes. She has C in Farads, epsilon_0 in Farads/meter. She has ${\displaystyle k={\frac {1}{4\pi \epsilon _{0}}}}$ (with units N·m^2/C^2) and I suppose area in m^2 and distance in meters. With this equation for k, the epsilon_0 cancels out in the first equation. Bubba73 You talkin' to me? 00:04, 20 September 2017 (UTC)
This looks like Capacitance#Capacitors and Capacitor#Parallel-plate model. And it looks like there is some confusion over which epsilon is used for which context (permittivity or dielectric constant of whatever the material is, vs the electric constant, an actual "constant"). If she's using epsilon_0 as the dielectric constant of the material in F/m, that's already confusing, but that would mean k is the Vacuum permittivity but without the units? Or else is she using epsilon_0 as the vacuum permittivity in its usual F/m units, with k being the dielectric constant of her material? DMacks (talk) 02:07, 20 September 2017 (UTC)
Yes, k is a dimensionless constant here, representing the relative permittivity of the dielectric material that separates the plates.
It is 1 for empty space, and larger than 1 for any other material. Looie496 (talk) 02:37, 20 September 2017 (UTC)
To add to this, that's the equation for the capacitance of a parallel-plate capacitor. It is a notable equation and she should understand its derivation and what it means. ε0 is the permittivity of vacuum, which has the lowest possible permittivity. So the equation says that if you have a better dielectric, you will have a better capacitor. And that's because the dielectric resists the electric field, so you will have a lower voltage difference for a given amount of charge, so you can store more charge in the capacitor at a given voltage. (Tangent: another practical reason to use a dielectric instead of air is to increase the dielectric strength. Dielectrics including air will break down and become conductive at high enough voltages, and we want that upper limit as high as possible.) C0617470r (talk) 03:06, 20 September 2017 (UTC)
So the k in the equation at the top is NOT Coulomb's constant? In her notes, she has ${\displaystyle k={\frac {1}{4\pi \epsilon _{0}}}=8.99\times 10^{9}}$ (and SI units), which is Coulomb's constant. Bubba73 You talkin' to me? 03:18, 20 September 2017 (UTC)
She talks about an example the teacher did: "a 4.2 nF capacitor has an area of 2.8 m^2 and a separation of 12 mm. What is the dielectric constant?" He used the equation at the top, solved for k, getting k=2.03. So this k is the dielectric constant, not Coulomb's constant? Bubba73 You talkin' to me? 04:42, 20 September 2017 (UTC)
Right, it is not Coulomb's constant. Unfortunately they share the same symbol. C0617470r (talk) 04:46, 20 September 2017 (UTC)
Coulomb's constant is usually written ${\displaystyle k_{\text{e}}}$ - she apparently failed to write down the subscript, leading to the confusion. Bubba73 You talkin' to me?
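The arithmetic in this thread is easy to check. Here is a quick numerical sketch (not part of the original discussion; it assumes ε0 ≈ 8.854e-12 F/m) showing that Coulomb's constant and the dielectric constant in the teacher's example really are two different quantities that happen to share the symbol k:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, in F/m

# Coulomb's constant: k_e = 1/(4*pi*eps0), in N*m^2/C^2
k_e = 1 / (4 * math.pi * EPS0)

# Dielectric constant from the teacher's example: solve C = k*eps0*A/d for k.
# C = 4.2 nF, A = 2.8 m^2, d = 12 mm; the result is dimensionless.
C, A, d = 4.2e-9, 2.8, 0.012
k = C * d / (EPS0 * A)

print(f"k_e = {k_e:.3e}")  # Coulomb's constant, ~8.99e9
print(f"k   = {k:.2f}")    # dielectric constant, ~2.03 as in the thread
```

Plugging in the numbers gives k ≈ 2.03, matching the teacher's result, while Coulomb's constant comes out near 8.99×10^9, the value quoted from the notes.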
04:53, 20 September 2017 (UTC)
It is also notable that "k" stands for a bewildering number of constants in scientific equations, to the point where the best assumption about its meaning is "whatever constant is used in this particular equation". Besides the aforementioned Coulomb's constant, there's also (just thinking off the top of my head; I am certain this list isn't exhaustive) the reaction rate constant, the spring constant in Hooke's law, the Boltzmann constant, the generic proportionality constant, etc. etc. --Jayron32 10:59, 20 September 2017 (UTC)

# September 20

## Unknown plumeria leaf disease

Well, it's a long shot, but figured I'd ask the smartest folks I know. My plumeria plant is having leaves die off, and I have no idea what's causing it. If it's helpful, I'm pretty close to the coast in southern California. They get maybe 75% sun and I only water them when they look like they need it. The plant as a whole looks healthy, although it's growing very slowly. I've had this problem with other plants, which were also in containers on my deck. Anywho, thanks in advance! Drewmutt (^ᴥ^) talk 06:15, 20 September 2017 (UTC)
That looks like a problem with growing conditions; I can see no evidence of an external disease like insects or fungus. This could be a normal process where leaves mature and then fall. Is the plant being irrigated with water that may be slightly alkaline, from a tap for example? Do you feed the plant regularly? Richard Avery (talk) 07:49, 20 September 2017 (UTC)
Richard Avery Thanks so much for the quick reply. It's a good question about the water, it's possible, and I have some litmus papers handy (which good Wikipedian doesn't), so I can check tomorrow. Regarding feeding, I thought the same thing, since it presents itself like a K deficiency, so I got me a nice tame 5-5-5 organic fertilizer (not opposed to non-organic, btw) with limited success. Maybe it's just the wrong season?
But I would expect them to just go yellow and fall as opposed to becoming necrotic. Drewmutt (^ᴥ^) talk 08:09, 20 September 2017 (UTC)
• According to this, it may be a disease such as rust or black tip fungus (for which I cannot find a Wikipedia article). --Jayron32 10:54, 20 September 2017 (UTC)
2017-09-20 11:43:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191959381103516, "perplexity": 2219.0239047710234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687255.13/warc/CC-MAIN-20170920104615-20170920124615-00098.warc.gz"}
http://www.visionforum.com/some-kind-fazztqc/f3da80-minkowski-distance-r
The Minkowski distance (also called the $L_p$ distance) defines a distance between two points in a normed vector space and is a generalisation of both the Euclidean distance and the Manhattan distance. It is named after the German mathematician Hermann Minkowski. For two $n$-dimensional vectors $x$ and $y$ and an order $p$, the generic formula is

$$D(x,y)=\left(\sum_{i=1}^{n}|x_i-y_i|^p\right)^{1/p}.$$

More generally, given a distance function $\delta: E\times E \longrightarrow \mathbb{R}$ between elements of a universe set $E$, one can define $\mathrm{MinkowskiDis}(u,v)=\left(\sum_{i=1}^{n}\delta(u[i],v[i])^p\right)^{1/p}$ on $E^n$, where $p$ is a positive integer.

Although theoretically infinite measures exist by varying the order of the equation, just three have gained importance. When p = 1, the distance is the Manhattan distance (synonyms: L1-norm, taxicab or city-block distance); for two vectors of ranked ordinal variables it is sometimes called the Footruler distance, and for binary vectors it coincides with the Hamming distance, the number of bits that differ. When p = 2, it is the Euclidean distance. In the limiting case of p reaching infinity, we obtain the Chebyshev distance, the largest component-wise difference; in the limit of p reaching negative infinity, the smallest. More generally, the Minkowski distance can be viewed as a multiple of the power mean of the component-wise differences between the two points.

For $p \geq 1$ the Minkowski distance is a metric, as a result of the Minkowski inequality. For $p < 1$ the triangle inequality fails — the distance from $(0,0)$ to $(1,1)$ is $2^{1/p} > 2$, while both legs of the path through $(0,1)$ have length 1 — so it is not a metric, although the construction still yields an F-norm.

Worked question. (a) What is the relationship between the distances obtained from the Minkowski measure when r = 1, r = 2 and r → ∞ (which one is smaller and which one is greater)? For fixed points the distance is non-increasing in the order, so the Manhattan distance (r = 1) is the greatest and the Chebyshev distance (r → ∞) the smallest. (b) For the two points (x1 = 0, y1 = 0) and (x2 = 5, y2 = 12) on a two-dimensional plane: r = 1 gives |5| + |12| = 17, r = 2 gives $\sqrt{5^2 + 12^2} = 13$, and as r increases further the distance decreases towards the Chebyshev value max(5, 12) = 12.

In R, the Minkowski distance is part of the dist() function in the stats package, via a dedicated p parameter: dist(x, method = "minkowski", p = 2). Only the lower triangle of the resulting distance matrix is stored. The proxy package provides a compatible dist() function. For time-series databases the measure is available through TSDatabaseDistances, for ts, zoo or xts objects through TSDistances, and it can also be invoked via the wrapper function LPDistance. In Python, scipy.spatial.distance.minkowski(u, v, p = 2, w = None) computes the (optionally weighted) Minkowski distance between two 1-D numeric vectors. MATLAB's pdist supports the Minkowski distance alongside many other metrics: Euclidean, standardized Euclidean, Mahalanobis (using the sample covariance C = cov(X,'omitrows'), or another symmetric positive definite matrix C supplied via DistParameter), city block, Chebychev, cosine, correlation, Hamming, Jaccard and Spearman. CGAL provides a weighted variant as CGAL::Weighted_Minkowski_distance; weighted Minkowski distances, induced by the corresponding weighted norms, have also been used to define a broad class of association measures for categorical variables.

A small reproducible dataset for experimenting in R:

# set seed to make the example reproducible
set.seed(123)
test <- data.frame(x = sample(1:10000, 7),
                   y = sample(1:10000, 7),
                   z = sample(1:10000, 7))
test
#   x    y    z
# 1 2876 8925 1030
# 2 7883 5514 8998
# 3 4089 4566 2461
# 4 8828 9566  421
# 5 9401 4532 3278
# 6  456 6773 9541
# 7 …

Related measures and uses. The Mahalanobis distance is an effective multivariate distance metric that measures the distance between a point and a distribution; it has excellent applications in multivariate anomaly detection, classification on highly imbalanced datasets and one-class classification. The cosine index measures the cosine of the angle between two vectors. The binary distance is the proportion of bits in which only one is on amongst those in which at least one is on. The choice of distance also matters in clustering: hierarchical clustering can virtually handle any distance metric, while k-means relies on Euclidean distances; k-means additionally requires a random step at its initialisation that may yield different results if the process is re-run, which would not be the case in hierarchical clustering. In collision detection, the minimum distance from the origin to the Minkowski-differenced resultant AABB gives the penetration vector — the vector that can be applied to one AABB to make sure it leaves the other.

Finally, the name also appears in physics. In mathematical physics, Minkowski space (or Minkowski spacetime) is a combination of three-dimensional Euclidean space and time into a four-dimensional manifold in which the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded; its metric signature is written (−+++) or (+−−−), reflecting that the time dimension is treated differently from the three spatial dimensions. Hermann Minkowski (22 June 1864 – 12 January 1909) was a German mathematician of Polish-Jewish descent and professor at Königsberg, Zürich and Göttingen; he created and developed the geometry of numbers and used geometrical methods to solve problems in number theory, mathematical physics and the theory of relativity.
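Since the text works through the points (0, 0) and (5, 12) at several orders, here is a minimal self-contained sketch in Python (used instead of R so the example needs no packages; the function name `minkowski` is ours, not SciPy's):

```python
# Minkowski distance of order p between two equal-length vectors.
# p = float("inf") gives the Chebyshev (max-component) limit.
def minkowski(x, y, p):
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float("inf"):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

x, y = (0, 0), (5, 12)
print(minkowski(x, y, 1))             # Manhattan: 17.0
print(minkowski(x, y, 2))             # Euclidean: 13.0
print(minkowski(x, y, float("inf")))  # Chebyshev: 12
```

As the order grows the distance shrinks monotonically from the Manhattan value toward the Chebyshev value, which is the relationship asked about in the worked question.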
https://www.thejournal.club/c/paper/332360/
Sampling from the low temperature Potts model through a Markov chain on flows Jeroen Huijben, Viresh Patel, Guus Regts In this paper we consider the algorithmic problem of sampling from the Potts model and computing its partition function at low temperatures. Instead of directly working with spin configurations, we consider the equivalent problem of sampling flows. We show, using path coupling, that a simple and natural Markov chain on the set of flows is rapidly mixing. As a result we find a $\delta$-approximate sampling algorithm for the Potts model at low enough temperatures, whose running time is bounded by $O(m^2\log(m\delta^{-1}))$ for graphs $G$ with $m$ edges.
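To make the sampling problem in the abstract concrete: the paper's algorithm is a Markov chain on flows analysed via path coupling, but the sketch below instead shows generic single-site Glauber dynamics on Potts spin configurations — a different, simpler chain, purely to illustrate what "sampling from the Potts model" means. The graph, number of colours q, inverse temperature beta and step count are all arbitrary illustrative choices:

```python
import math
import random

def glauber_step(spins, adj, q, beta, rng):
    """One step of single-site Glauber dynamics for the ferromagnetic
    q-state Potts model: resample a uniformly chosen vertex from its
    conditional distribution given its neighbours' colours."""
    v = rng.randrange(len(spins))
    weights = []
    for colour in range(q):
        agree = sum(1 for u in adj[v] if spins[u] == colour)
        weights.append(math.exp(beta * agree))
    r = rng.random() * sum(weights)
    for colour, w in enumerate(weights):
        r -= w
        if r <= 0:
            spins[v] = colour
            break

# Toy instance: a 4-cycle with q = 3 colours.
rng = random.Random(42)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
spins = [rng.randrange(3) for _ in range(4)]
for _ in range(1000):
    glauber_step(spins, adj, 3, beta=1.0, rng=rng)
print(spins)
```

At low temperature (large beta) this naive chain mixes slowly, which is exactly the regime where the flow representation of the paper pays off.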
http://www.numdam.org/item/AIHPC_2009__26_1_181_0/
A Stochastic Lagrangian Proof of Global Existence of the Navier-Stokes Equations for Flows With Small Reynolds Number Annales de l'I.H.P. Analyse non linéaire, Volume 26 (2009) no. 1, p. 181-189 @article{AIHPC_2009__26_1_181_0, author = {Iyer, Gautam}, title = {A Stochastic Lagrangian Proof of Global Existence of the Navier-Stokes Equations for Flows With Small Reynolds Number}, journal = {Annales de l'I.H.P. Analyse non lin\'eaire}, publisher = {Elsevier}, volume = {26}, number = {1}, year = {2009}, pages = {181-189}, doi = {10.1016/j.anihpc.2007.10.003}, zbl = {1156.76019}, mrnumber = {2483818}, language = {en}, url = {http://www.numdam.org/item/AIHPC_2009__26_1_181_0} } Iyer, Gautam. A Stochastic Lagrangian Proof of Global Existence of the Navier-Stokes Equations for Flows With Small Reynolds Number. Annales de l'I.H.P. Analyse non linéaire, Volume 26 (2009) no. 1, pp. 181-189. doi : 10.1016/j.anihpc.2007.10.003. http://www.numdam.org/item/AIHPC_2009__26_1_181_0/ [1] Calderón A. P., Zygmund A., Singular Integrals and Periodic Functions, Studia Math. 14 (1954) 249-271, (1955). | MR 69310 | Zbl 0064.10401 [2] Chorin A. J., Marsden J. E., A Mathematical Introduction to Fluid Mechanics, Texts in Applied Mathematics, vol. 4, third ed., Springer-Verlag, New York, 1993. | Zbl 0774.76001 [3] Constantin P., Foias C., Navier-Stokes Equations, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, 1988. | MR 972259 | Zbl 0687.35071 [4] Constantin P., An Eulerian-Lagrangian Approach for Incompressible Fluids: Local Theory, J. Amer. Math. Soc. 14 (2) (2001) 263-278, (electronic). | MR 1815212 | Zbl 0997.76009 [5] Constantin P., Iyer G., A Stochastic Lagrangian Representation of the 3-Dimensional Incompressible Navier-Stokes Equations, Comm. Pure Appl. Math. (2006), in press, available at, arXiv:math.PR/0511067. | Zbl 1156.60048 [6] Constantin P., Iyer G., Stochastic Lagrangian Transport and Generalized Relative Entropies, Commun. Math. Sci. 
4 (4) (2006) 767-777. | MR 2264819 | Zbl 1120.35046 [7] Constantin P., Kiselev A., Ryzhik L., Zlatoŝ A., Diffusion and Mixing in Fluid Flow, Ann. Math. (2006), in press, available at arXiv:, math.AP/0509663. | MR 2434887 | Zbl 1180.35084 [8] Kato T., Fujita H., On the Nonstationary Navier-Stokes System, Rend. Sem. Mat. Univ. Padova 32 (1962) 243-260. | Numdam | MR 142928 | Zbl 0114.05002 [9] Fannjiang A., Kiselev A., Ryzhik L., Quenching of Reaction by Cellular Flows, Geom. Funct. Anal. 16 (1) (2006) 40-69. | MR 2221252 | Zbl 1097.35077 [10] Fefferman C. L., Existence and Smoothness of the Navier-Stokes Equation, in: The Millennium Prize Problems, Clay Math. Inst., Cambridge, MA, 2006, pp. 57-67. | MR 2238274 [11] Freidlin M., Functional Integration and Partial Differential Equations, Annals of Mathematics Studies, vol. 109, Princeton University Press, Princeton, NJ, 1985. | MR 833742 | Zbl 0568.60057 [12] Friedman A., Stochastic Differential Equations and Applications. Vol. 1, Probability and Mathematical Statistics, vol. 28, Academic Press Harcourt Brace Jovanovich Publishers, New York, 1975. | MR 494490 | Zbl 0323.60056 [13] Gallagher I., Chemin J.-Y., On the Global Wellposedness of the 3-D Navier-Stokes Equations With Large Initial Data, Preprint, available at arXiv:, math.AP/0508374. | MR 2290141 [14] Iyer G., A Stochastic Perturbation of Inviscid Flows, Commun. Math. Phys. 266 (3) (2006) 631-645, available at arXiv:, math.AP/0505066. | MR 2238892 | Zbl 1127.76017 [15] G. Iyer, A stochastic Lagrangian formulation of the Navier-Stokes and related transport equations, Ph.D. Thesis, University of Chicago, 2006. [16] Karatzas I., Shreve S. E., Brownian Motion and Stochastic Calculus, Graduate Texts in Mathematics, vol. 113, second ed., Springer-Verlag, New York, 1991. | MR 1121940 | Zbl 0734.60060 [17] Koch H., Tataru D., Well-Posedness for the Navier-Stokes Equations, Adv. Math. 157 (1) (2001) 22-35. | MR 1808843 | Zbl 0972.35084 [18] Krylov N. 
V., Lectures on Elliptic and Parabolic Equations in Hölder Spaces, Graduate Studies in Mathematics, vol. 12, American Mathematical Society, Providence, RI, 1996. | MR 1406091 | Zbl 0865.35001 [19] Krylov N. V., Rozovskiĭ B. L., Stochastic Partial Differential Equations and Diffusion Processes, Uspekhi Mat. Nauk 37 (6(228)) (1982) 75-95, (in Russian). | MR 683274 | Zbl 0508.60054 [20] Kunita H., Stochastic Flows and Stochastic Differential Equations, Cambridge Studies in Advanced Mathematics, vol. 24, Cambridge University Press, Cambridge, 1997, Reprint of the 1990 original. | MR 1472487 | Zbl 0743.60052 [21] Rozovskiĭ B. L., Stochastic Evolution Systems: Linear Theory and Applications to Nonlinear Filtering, Mathematics and its Applications (Soviet Series), vol. 35, Kluwer Academic Publishers Group, Dordrecht, 1990, Translated from the Russian by A. Yarkho. | MR 1135324 | Zbl 0724.60070 [22] Stein E. M., Singular Integrals and Differentiability Properties of Functions, Princeton Mathematical Series, vol. 30, Princeton University Press, Princeton, NJ, 1970. | MR 290095 | Zbl 0207.13501
https://marktheballot.blogspot.com/2020/01/polls-bias-variance-trade-off-and-2019.html
## Monday, January 20, 2020 ### Polls, the bias-variance trade-off and the 2019 Election Data scientists typically want to minimise the error associated with their predictions. Certainly, pollsters want their opinion poll estimates immediately prior to an election to reflect the actual election outcome as closely as possible. However, this is no easy task. With many predictive models, there is often a trade-off between the error associated with bias and the error associated with variance. In data science, and particularly in the domain of machine learning, this phenomenon is known as the bias-variance trade-off or the bias-variance dilemma. According to the bias-variance trade-off model, less than optimally tuned predictive models often either deliver high bias and low variance predictions, or the opposite: low bias and high variance predictions. At this stage, a chart might help with your intuition, and I will take a little time to define what I mean by bias and variance. ### Two types of prediction error: bias and variance Bias tells us how closely our predictions are typically, or on average, aligned with the true value for the population or the actual outcome. In terms of the above bullseye diagram, how closely do our predictions on average land in the very centre of the bullseye? Variance tells us about the variability in our predictions. It tells us how tightly our individual predictions cluster. In other words, variance indicates how far and wide our predictions are spread. Do they span a small range or a large range? In our 2x2 bullseye grid above, a predictive model (or in the case of the 2019 opinion polls a system of predictive models) can deliver predictions with high or low variance and high or low bias. ### In 2019, the opinion polls exhibited high bias and low variance My contention is that collectively, the opinion polling in the six-week lead-up to the 2019 Australian Federal election exhibited high bias and low variance. 
It was in the top lefthand quadrant of the bulls-eye diagram above. As evidence for this contention, I would note: Kevin Boneham has argued that by Australian standards, the opinion polls leading up to the 2019 election had their biggest miss since the mid-1980s. Also, I have previously commented on the extraordinarily low variance among the opinions polls prior to the 2019 election. ### Changes in statistical practice and social realities Before we come to the implications of the high bias/low variance polling outcome in 2019, it is worth reflecting how polling practice has been driven to change in recent times. In particular, I want to highlight the increased importance of data analytics in the production of opinion poll estimates. Newspoll was famous for introducing an era of reliable and accurate polling in the Australian context. With the near-universal penetration of land-line telephones to Australian households by the 1980s, pollsters were able to develop sample frames that came close to the Holy Grail of giving every member of the Australian voting public an equal chance of being included in any survey the pollster run. While pollsters came close, their sampling frames were not perfect, and weightings and adjustments were still needed for under-represented groups. Nonetheless, the extent to which weightings and analytics were needed to make up for shortfalls in the sample frame was small compared with today. So what has changed since the mid-1980s? Quite a few things, here are but a few: • the general population use of landlines has declined. In some cases, the plain old telephone system (POTS) has been replaced by a voice over internet protocol (VOIP) service. Many households have terminated their landline service altogether, in favour of the mobile phone. Younger people in share houses are very unlikely to have a shared telephone landline or even a VOIP service. Further, mobile and VOIP numbers are not listed in the White Pages by default. 
As a consequence, constructing a telephone-based sample frame is much more challenging than it once was.

• the rise of digital technologies has seen a growth in robocalls, which many people find annoying. Caller identification on phones has given rise to the practice of call screening. If a polling call is taken, busy people and those not interested in politics often just hang up. As a result of these and other factors, participation rates in telephone polls have plummeted. In the United States, Pew Research reported a decline in telephone survey responses from 36 per cent in 1997 to just 6 per cent in 2018. Newspoll's decision to abandon telephone polling suggests something similar may have happened in Australia.

• the newspapers that purchase much of the publicly available opinion polling have had their budgets squeezed, and they want to spend less on polling. Ironically, lower telephone survey response rates (as noted above) are driving up the costs of telephone surveys. As a consequence, pollsters have had to turn to lower-cost (and more challenging) sampling techniques, such as internet surveys.

• While more people may be online these days (as Newspoll argues), online participation remains less widespread than landlines were in the mid-1980s.

Because of the above, pollsters need to work much harder to address the shortcomings in their sampling frame. Pollsters need to make far more data-driven adjustments to today's raw poll results than happened in the mid-1980s. This work is far more complex than simply weighting for sub-cohort population shares. Which brings us back to the bias-variance trade-off.
### What the bias-variance trade-off outcome suggests

The error for a predictive model can be thought of as the sum of the squared differences between the true values and the predicted values from the model:

$$\text{Error} = \sum(y_{\text{predicted}} - y_{\text{true}})^2$$

The critical insight from the bias-variance trade-off is that this error can be decomposed into three components:

$$\text{Error} = \text{Bias}^2 + \text{Variance} + \text{Irreducible Error}$$

We have already discussed bias and variance. The irreducible error is the error that cannot be removed by developing a better predictive model. All predictive models will have some noise-related error. Simple models tend to underfit the data. They do not capture enough of the information or "signal" available in the data. As a result, they tend to have low variance but high bias. Complex models tend to overfit the data. Because they typically have many parameters, they tend to treat noise in the data as signal. This tends to result in predictive models that have low bias but high variance (driven by the noise in the data). Again, to help develop our intuition (and to avoid the complex mathematics), let's turn to a visual aid to get a sense of what it means to under-fit and over-fit a predictive model to the data. The data below (the red dots) come from a randomly perturbed sine wave between 0 and $\pi$. I have three models that look at those red dots (the perturbed data) and try to predict the form or shape of the underlying curve:

• a simple linear regression - which predicts a straight line
• a quadratic regression - which predicts a parabola
• a high-order polynomial regression - which can predict quite a complicated line

The linear regression (the straight blue line, top left) provides an under-fitted model for the curve evident in the data. The quadratic regression (the inverted parabola, bottom left) provides a reasonably good model. It is the closest of the models to a sine wave between the values of 0 and $\pi$.
And the polynomial regression of degree 25 (the squiggly blue line, bottom right) over-fits the data and reflects some of the random noise introduced by the perturbation process. The next chart compares the three blue models above with the actual sine wave before the perturbation. The closer the blue line is to the red line, the less error the model has. The model with the least error between the predicted line and the sine curve is the parabola. The straight line and the squiggly line both differ from the actual sine curve for most of their length. When we run these three predictive models many times, we can see the character of the error each model makes. In the next chart, we have generated a randomly perturbed set of points 1000 times (just like we did above) and asked each of the models to predict the underlying line 1000 times. Each of the predictions from the 1000 model runs has then been plotted on the next chart. For reference, the sine curve the models are trying to predict is over-plotted in light grey. Looking at the multiple runs, the bias and variance for the distribution of predictions from the three models become clear:

• If we look at the blue linear model in the top left, we see high bias (the straight-line predictions from this model deviate significantly from the sine curve). But the 1000 model predictions are tightly clustered, showing low variance across the predictions. The distribution of predictions from the under-fitted model shows high bias and low variance.

• The green quadratic model in the bottom left exhibits low bias. The 1000 model predictions follow the sine curve closely. They are also tightly clustered, showing low variance. The distribution of predictions from the optimally-fitted model shows low bias and low variance. These predictions have the least total error.
• The red high-order polynomial model in the bottom right exhibits low bias: on average, the 1000 predictions are close to the sine curve. However, the multiple runs exhibit quite a spread of predictions compared with the other two models. The distribution of predictions from the over-fitted model shows low bias but high variance.

Coming back to the three-component cumulative error equation above, we can see the least total error occurs at an optimal point between a simple and a complex model. Again, let's use a stylised chart to aid intuition. As model complexity increases, the prediction error associated with bias decreases, but the prediction error from variance increases. This minimal-total-error sweet spot (between simplicity and complexity) is the key insight of the bias-variance trade-off model. It is the best trade-off between the error from prediction bias and the error from prediction variance.

### So what does this mean for the polls before the 2019 election?

The high bias and low variance in the 2019 poll results suggest that, collectively, the pollsters had not done enough to ensure their polling models produced optimal election outcome predictions. In short, their predictive models were not complex enough to adequately overcome the sampling frame problems (noted above) that they were wrestling with. Without transparency from the pollsters, it is hard to know specifically where the pollsters' approach lacked the necessary complexity. Nonetheless, I have wondered to what extent the pollsters use each other (either purposefully or through confirmation bias) to benchmark their predictions and adjust for the (perceived) bias that arises from issues with their sampling frames. Mutual cross-validation, should it be occurring, would effectively simplify their collective models. I have also wondered whether the pollsters look at non-linearity effects between their sample frame and changes in voting intention in the Australian public.
If the samples that the pollsters are working with are more educated and more rusted on to a political party on average, their response to changing community sentiment may be muted compared with the broader community. For example, a predictive model that has been validated when community sentiment is at (say) 47 or 48 per cent, may not be reliable when community sentiment moves to 51 per cent. I have wondered whether the pollsters built rolling averages or some other smoothing technique into their predictive models. Again, if so, this could be a source of simplicity. I have wondered whether the pollsters are cycling through the same survey respondents over and over again. Again, if so, this could be a source of simplicity. I have many questions as to what might have gone wrong in 2019. The bias-variance trade-off model provides some suggestion as to what the nature of the problem might have been.
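For readers who want to reproduce the intuition above, the multi-run experiment can be sketched in a few lines of Python. This is my reconstruction, not the exact code behind the charts: the noise level, sample size and random seed are assumptions, though the three polynomial degrees match the post.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)
x = np.linspace(0, np.pi, 30)   # evaluation points on [0, pi]
true_y = np.sin(x)              # the curve each model tries to recover

def run_experiment(degree, runs=1000, noise=0.2):
    """Fit a polynomial of `degree` to `runs` perturbed sine samples and
    return (squared bias, variance) of the resulting predictions."""
    preds = np.empty((runs, x.size))
    for i in range(runs):
        noisy_y = true_y + rng.normal(0.0, noise, size=x.size)
        # Polynomial.fit rescales x internally, keeping high degrees stable
        preds[i] = Polynomial.fit(x, noisy_y, degree)(x)
    bias_sq = np.mean((preds.mean(axis=0) - true_y) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

results = {}
for degree in (1, 2, 25):       # under-fit, about right, over-fit
    results[degree] = run_experiment(degree)
    print(f"degree {degree}: bias^2={results[degree][0]:.4f} "
          f"variance={results[degree][1]:.4f}")
```

On a run like this, the linear model shows the largest squared bias, the degree-25 model the largest variance, and the quadratic the smallest sum of the two, which is the sweet spot in the stylised chart above.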
https://tex.stackexchange.com/questions/339305/how-to-get-default-formatting-for-chapter-section-etc?noredirect=1
# How to get default formatting for \chapter, \section, etc.?

I'd like to use titlesec to make some modifications to section and chapter headings, but \titleformat redefines the formatting entirely instead of just making modifications (e.g. if I just want to change the colour). How do I get titlesec to modify instead of entirely redefine? Is this possible? If not, how do I figure out what the current formatting of \chapter, \section, etc. is, so that I can replicateate it and make my modifications? In response to the request for a sample document, I guess the most basic is:

\documentclass{article}
\begin{document}
\section{Hello World}
Hello world!
\end{document}

However, I'm interested in a way to find out (e.g. to print out) the current formatting of \section instead of simply "knowing" it for a particular configuration.

• Can you give a short complete document showing your setup (like document class and style files)? What you consider as the default depends on such things. – gernot Nov 15 '16 at 15:24
• You could switch to KOMA-Script (in your case \documentclass{scrartcl}) and then use the \addtokomafont{}{} command to change single attributes of the fonts of e.g. headings. – Manuel Weinkauf Nov 15 '16 at 15:32
• Check the documentation of titlesec, section 9.2, Standard Classes. – Arash Esbati Nov 15 '16 at 15:33

For simple modifications such as the title colour, you can use the light version of \titleformat, but you have to check in article.cls (or report, or book) what the values of the parameters are (font size, weight, shape). Here is an example:

\documentclass[a4paper]{article}
\usepackage[svgnames]{xcolor}
\usepackage{titlesec}
\titleformat*{\section}{\color{IndianRed}\normalfont\bfseries\Large}
\begin{document}
\section{A short title}
This is a paragraph. This is a paragraph. This is a paragraph. This is a paragraph. This is a paragraph. This is a paragraph. This is a paragraph. This is a paragraph.
\end{document}

The standard classes are defined as follows (copied from the titlesec reference, section 9.2):

\titleformat{\chapter}[display]
  {\normalfont\huge\bfseries}{\chaptertitlename\ \thechapter}{20pt}{\Huge}
\titleformat{\section}
  {\normalfont\Large\bfseries}{\thesection}{1em}{}
\titleformat{\subsection}
  {\normalfont\large\bfseries}{\thesubsection}{1em}{}
\titleformat{\subsubsection}
  {\normalfont\normalsize\bfseries}{\thesubsubsection}{1em}{}
\titleformat{\paragraph}[runin]
  {\normalfont\normalsize\bfseries}{\theparagraph}{1em}{}
\titleformat{\subparagraph}[runin]
  {\normalfont\normalsize\bfseries}{\thesubparagraph}{1em}{}
\titlespacing*{\chapter}      {0pt}{50pt}{40pt}
\titlespacing*{\section}      {0pt}{3.5ex plus 1ex minus .2ex}{2.3ex plus .2ex}
\titlespacing*{\subsection}   {0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex}
\titlespacing*{\subsubsection}{0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex}
\titlespacing*{\paragraph}    {0pt}{3.25ex plus 1ex minus .2ex}{1em}
\titlespacing*{\subparagraph} {\parindent}{3.25ex plus 1ex minus .2ex}{1em}

Try this:

\documentclass{scrartcl}
\usepackage{xcolor}

• scrartcl does not know \chapter. Using the KOMA-Script command \RedeclareSectionCommand it is also possible to modify attributes like beforeskip, afterskip, indent. The default settings for these attributes can be found in the KOMA-Script documentation. – esdd Nov 15 '16 at 15:53
https://www.semanticscholar.org/paper/Compensation-temperature-of-3d-mixed-ternary-alloy-Kis-Cam-Aydiner/ca0045aa2857dd22393f0ab2334da705d5929755
# Compensation temperature of 3d mixed ferro-ferrimagnetic ternary alloy

@article{KisCam2010CompensationTO,
  title={Compensation temperature of 3d mixed ferro-ferrimagnetic ternary alloy},
  author={Ebru Kis-Cam and Ekrem Aydiner},
  journal={Journal of Magnetism and Magnetic Materials},
  year={2010},
  volume={322},
  pages={1706-1709}
}

• Published 31 October 2009 • Materials Science, Physics • Journal of Magnetism and Magnetic Materials

20 Citations

### Dynamic magnetic features of a mixed ferro-ferrimagnetic ternary alloy in the form of ABpC1−p

• Materials Science • The European Physical Journal Plus • 2021

In this work, we studied the dynamic magnetic features of a mixed ferro-ferrimagnetic ternary alloy in the form of AB p C 1− p . We also investigated the effect of Hamiltonian parameters on the

### The mixed‐spin ferro–ferrimagnetic ternary alloy

The ${\rm AB}_{p} {\rm C}_{1{-} p}$ type of mixed‐spin ferro–ferrimagnetic ternary‐alloy with the spins SA = 1, SB = 3/2, and SC = 1/2 is investigated on the Bethe lattice on which the A ions are

## References

### Interacting domain walls in an easy-plane ferromagnet

• Materials Science • 2002

Structural, magnetic, and transport properties in La0.7Ca0.3Mn1-xScxO3 have been experimentally investigated. A sensitive response on the magnetic and transport properties to Sc substituting for Mn

### Electrochemically Tunable Magnetic Phase Transition in a High-Tc Chromium Cyanide Thin Film

• Materials Science • Science • 1996

Molecular-based ferrimagnetic thin films with high critical temperatures (Tc) composed of mixed-valence chromium cyanides were synthesized by means of a simple electrochemical route.
The highest Tc ### DESIGN OF A NOVEL MAGNET EXHIBITING PHOTOINDUCED MAGNETIC POLE INVERSION BASED ON MOLECULAR FIELD THEORY • Chemistry • 1999 We show a novel magnetic phenomenon, “photoinduced magnetic pole inversion”, which occurs even in the absence of an external magnetic field. The key of this strategy is to control the compensation ### Photoinduced magnetic pole inversion in a ferro-ferrimagnet: (Fe0.40IIMn0.60II) 1.5CrIII(CN)6 • Physics • 1997 We tried to design the magnet exhibiting magnetic pole (N and S) inversion by photostimuli. The magnetization of Fe1.5IICrIII(CN)6⋅7.5H2O was changed in a photon mode by visible light. A ### Fabrication, Structure, and Magnetic Properties of Highly Ordered Prussian Blue Nanowire Arrays • Materials Science, Chemistry • 2002 Highly ordered Prussian blue nanowire arrays with diameters of about 50 nm and lengths up to 4μm have been fabricated for the first time by an electrodepositing technology with two-step anodizing
https://tex.stackexchange.com/questions/370461/representing-conways-game-of-life?noredirect=1
# Representing Conway's Game of Life [duplicate]

I'm writing an article about Conway's Game of Life, and I'm using the Springer LNCS template. In my article, I must sometimes show some Game of Life patterns, for example the following:

To generate such a pattern, I used the TikZ package. The code (without other formatting commands) is the following:

\begin{tikzpicture}
\fill[black] (1,4) rectangle (3,6);
\fill[black] (11,3) rectangle (12,6);
\fill[black] (12,2) rectangle (13,3);
\fill[black] (12,6) rectangle (13,7);
\fill[black] (13,1) rectangle (15,2);
\fill[black] (13,7) rectangle (15,8);
\fill[black] (15,4) rectangle (16,5);
\fill[black] (16,2) rectangle (17,3);
\fill[black] (16,6) rectangle (17,7);
\fill[black] (17,3) rectangle (18,6);
\fill[black] (18,4) rectangle (19,5);
\fill[black] (21,5) rectangle (23,8);
\fill[black] (23,4) rectangle (24,5);
\fill[black] (23,8) rectangle (24,9);
\fill[black] (25,3) rectangle (26,5);
\fill[black] (25,8) rectangle (26,10);
\fill[black] (35,6) rectangle (37,8);
\draw[step=1cm,gray,thick] (0,0) grid (38,11);
\end{tikzpicture}

As you can see, this is pretty bad, especially as I must now produce dozens of such patterns, and manually specifying the coordinates of every black cell is... well, not viable. The question is: is there a way to achieve the same result as above, but using (for example) a matrix-style input? What I mean is something like:

\sort-of-matrix-command
{
W & W & W & W & W & B \\
W & B & W & W & W & W \\
B & W & B & B & W & W \\
W & W & B & B & B & W \\
B & B & W & W & W & W \\
W & W & W & W & W & W \\
}

where "W" or "B" stand for a white cell and a black cell respectively. If it is not possible, any other way which is faster than the one I'm currently using is well accepted.
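Since the question allows "any other way which is faster", one workaround (my own sketch, not an answer from the site) is to pre-generate the \fill commands from a W/B grid with a short script, then paste or \input the result into the document. The function name and the space-separated grid format here are my inventions:

```python
def life_to_tikz(pattern: str) -> str:
    """Turn a space-separated W/B grid (one row per line) into a TikZ
    picture: one \\fill per black cell, plus the background grid."""
    rows = [line.split() for line in pattern.strip().splitlines()]
    height, width = len(rows), len(rows[0])
    out = [r"\begin{tikzpicture}"]
    for i, row in enumerate(rows):
        for j, cell in enumerate(row):
            if cell == "B":               # black (live) cell
                x, y = j, height - 1 - i  # TikZ's y-axis points upwards
                out.append(rf"\fill[black] ({x},{y}) rectangle ({x + 1},{y + 1});")
    out.append(rf"\draw[step=1cm,gray,thick] (0,0) grid ({width},{height});")
    out.append(r"\end{tikzpicture}")
    return "\n".join(out)

tikz = life_to_tikz("""
W B W
B B W
W W B
""")
print(tikz)
```

Adjacent live cells come out as one unit rectangle each rather than merged blocks, which TikZ renders identically to the hand-written version.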
https://socratic.org/questions/what-is-the-acceleration-due-to-gravity-on-the-surface-of-a-planet-that-has-twic
# What is the acceleration due to gravity on the surface of a planet that has twice the mass of the Earth and half its radius?

May 20, 2016

The acceleration scales linearly with the mass but with the inverse square of the radius, so the result is $9.8 \frac{m}{s^2} \cdot 2 \cdot 4 = 78.4 \frac{m}{s^2}$

#### Explanation:

Let's first look at the equation for the force of gravity:

$F_g = G \frac{m_1 m_2}{r^2}$

For an object on the surface of a planet of mass $m_1$ and radius $r$, dividing out the object's own mass $m_2$ gives the surface acceleration $g = G \frac{m_1}{r^2}$. So what happens when we double the mass of the Earth and reduce its radius to 1/2? Let's multiply $m_1$ by 2 and substitute in $\frac{1}{2} r$ for $r$. So first start with the full equation:

$F_g = G \frac{m_1 m_2}{r^2}$

then make the substitutions:

$F_g = G \frac{(2 m_1) m_2}{(\frac{1}{2} r)^2}$

$F_g = G \frac{(2 m_1) m_2}{\frac{1}{4} r^2}$

So the numerator increases linearly ($\times 2$), while the denominator shrinks quadratically to a quarter, which multiplies the result by 4. The acceleration due to gravity on Earth is roughly $9.8 \frac{m}{s^2}$, but on this other planet, it would be:

$9.8 \frac{m}{s^2} \cdot 2 \cdot 4 = 78.4 \frac{m}{s^2}$
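The scaling argument can be checked numerically. This sketch uses standard reference values for the constants; the function name is my own:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def surface_gravity(mass: float, radius: float) -> float:
    """Acceleration at the surface of a spherical body: g = G*M / r^2."""
    return G * mass / radius**2

g_earth = surface_gravity(M_EARTH, R_EARTH)
g_planet = surface_gravity(2 * M_EARTH, R_EARTH / 2)

print(f"g_earth  = {g_earth:.2f} m/s^2")   # roughly 9.8 m/s^2
print(f"g_planet = {g_planet:.2f} m/s^2")  # exactly 8x g_earth
```

Doubling the mass contributes a factor of 2 and halving the radius a factor of 4, so the ratio is exactly 8 regardless of the constants used.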
https://fbclagrange.org/2020/03/03/devotions-200303/
(706) 884-5631 info@fbclagrange.org Select Page Jesus, when he began his ministry, was about thirty years of age, being the son (as was supposed) of Joseph, the son of Heli, the son of Matthat, the son of Levi, the son of Melchi, the son of Jannai, the son of Joseph, the son of Mattathias, the son of Amos, the son of Nahum, the son of Esli, the son of Naggai, the son of Maath, the son of Mattathias, the son of Semein, the son of Josech, the son of Joda, the son of Joanan, the son of Rhesa, the son of Zerubbabel, the son of Shealtiel, the son of Neri, the son of Melchi, the son of Addi, the son of Cosam, the son of Elmadam, the son of Er, the son of Joshua, the son of Eliezer, the son of Jorim, the son of Matthat, the son of Levi, the son of Simeon, the son of Judah, the son of Joseph, the son of Jonam, the son of Eliakim, the son of Melea, the son of Menna, the son of Mattatha, the son of Nathan, the son of David, the son of Jesse, the son of Obed, the son of Boaz, the son of Sala, the son of Nahshon, the son of Amminadab, the son of Admin, the son of Arni, the son of Hezron, the son of Perez, the son of Judah, the son of Jacob, the son of Isaac, the son of Abraham, the son of Terah, the son of Nahor, the son of Serug, the son of Reu, the son of Peleg, the son of Eber, the son of Shelah, the son of Cainan, the son of Arphaxad, the son of Shem, the son of Noah, the son of Lamech, the son of Methuselah, the son of Enoch, the son of Jared, the son of Mahalaleel, the son of Cainan, the son of Enos, the son of Seth, the son of Adam, the son of God. Luke 3:23-38 ESV Genealogies are part of God’s perfect Word to us. They usually serve to make a specific point and are typically geared toward the original audience. In Luke’s gospel, he linked Jesus all the way back to Adam, thus portraying Jesus as the Messiah for all of humankind, not just the Jews. Jesus is not only the Son of David but also the Son of Adam, indeed the last Adam, who stands related to all humankind. 
All of us know someone who needs to hear about Jesus. Is it someone you work with? Someone you go to school with? How about a neighbor or a friend? Who in your life needs to hear about the truth of who Jesus is, that He is the Messiah, Lord of all? Write that person’s name on your heart right now. God has put that person in your life for a special purpose, that you tell him or her about His saving grace. This person is in your mission field. What steps can you take today or this week to have a conversation with your “one” for the purpose of telling them about Jesus? Pray right now for the opportunity to tell your “one” about Jesus as Messiah.
http://mymathforum.com/real-analysis/43054-measure-theory.html
My Math Forum - Real Analysis

measure theory

#1 - James1973 (Newbie, Joined: Oct 2013, Posts: 25) - April 20th, 2014, 01:40 PM

Let f be a measurable function. Assume that lim λ m({x | f(x) > λ}) exists and is finite as λ tends to infinity. Here m is the Lebesgue measure on R. Does this imply that ∫|f| dm is finite?

Last edited by James1973; April 20th, 2014 at 01:48 PM.

#2 - mathman (Global Moderator, Joined: May 2007, Posts: 6,683) - April 20th, 2014, 01:57 PM

No - example: f(x) = |1/x|. Even if you exclude an interval around the origin (so that the function is bounded), the integral still diverges.

#3 - James1973 - April 20th, 2014, 02:08 PM

Quote: Originally Posted by mathman
"No - example: f(x) = |1/x|. Even if you exclude an interval around the origin (function is bounded), the integral still diverges."

Thanks for your example. But I cannot convince myself that I understand the measure in your case. I mean, how should I think about the measure m({x | 1/x > λ}) and the limit of λ m({x | 1/x > λ})?

#4 - mathman - April 21st, 2014, 01:50 PM

Consider the interval (0,∞) - sufficient because of symmetry. Let λ = 1/y. Then x < y => 1/x > 1/y = λ. Therefore λ m({x | 1/x > λ}) = λy = 1. If you exclude an interval around the origin, then 1/x is bounded and m({x | 1/x > λ}) = 0 for large enough λ.
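A quick sanity check of mathman's counterexample in post #4 (my own sketch, not from the thread): on (0, ∞), the level set {x : 1/x > λ} is the interval (0, 1/λ), so λ·m({1/x > λ}) is identically 1, while the tail integral ∫₁ᵀ (1/x) dx = log T grows without bound.

```python
import math

# Level sets of f(x) = 1/x on (0, infinity):
# {x : 1/x > lam} = (0, 1/lam), an interval of Lebesgue measure 1/lam.
level_products = []
for lam in (1.0, 10.0, 1e6):
    measure = 1.0 / lam                 # m({x : 1/x > lam})
    level_products.append(lam * measure)
print(level_products)                   # each product is (about) 1: the limit exists

# Yet f is not integrable: even away from the origin,
# the integral over (1, T) is log(T), unbounded as T grows.
tails = [math.log(T) for T in (10.0, 1e3, 1e9)]
print(tails)                            # strictly increasing, no finite limit
```

So the hypothesis in the question (the limit of λ m({f > λ}) exists and is finite) is satisfied with limit 1 on (0, ∞), even though ∫|f| dm = ∞, exactly as the thread concludes. (On all of R the product is 2 by symmetry, which changes nothing.)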
https://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=91&journalID=43692&pageb=1&userQueryID=&sort=&local_page=1&sorType=&sorCol=
Glass Structures & Engineering
Hybrid journal (may contain Open Access articles). ISSN (Print) 2363-5142. ISSN (Online) 2363-5150. Published by Springer-Verlag.

• Pseudo static experimental study on spider-supported glass curtain walls
Abstract: Glass curtain walls (GCWs) are among the most commonly used building envelope components in modern buildings, and they can be divided into point-supported or frame-supported GCWs. Much of the current literature on the seismic performance of GCWs has focused on frame-supported GCWs, resulting in a dearth of data about point-supported GCWs. In this work, a pseudo-static experimental study was performed on spider-supported GCWs, a common type of point-supported GCW, with three different types of glass, namely monolithic, laminated, and insulating glass. The damage characteristics and seismic performance of each type of glass and of the spiders were studied. For the tested 1.2 m by 1.2 m panels, the ultimate approximate inter-story drift ratios for monolithic, laminated and insulating glass panels were 1/23, 1/28, and 1/24 rad, respectively. The integrality of the glass panels was believed to be the main reason for the different behaviors of the tested specimens. Both monolithic and insulating glass specimens cracked abruptly and ejected numerous fragments onto the ground. The fragments of laminated glass panels stayed attached to the polymer film even after failure, preventing the ejection of fragments. The spiders were found to be severely deformed when the glass panels failed. More work is needed to systematically test and analyze the mechanical performance with more specimens under different height-to-width ratios, interlayer materials, glass thicknesses, and environmental temperatures.
PubDate: 2022-04-29

• High strain rate characterisation of soda-lime-silica glass and the effect of residual stresses
Abstract: A ring-on-ring test configuration for the equibiaxial flexural testing of flat samples was integrated into a novel modified split-Hopkinson pressure bar (SHPB) setup.
The established modifications enabled high-speed cameras for fracture assessment and non-contact optical deflection measurements using stereo digital image correlation (stereo-DIC). In the present paper, this setup was utilised to characterise the flexural surface strength and stiffness (Young's modulus) of circular, as-received soda-lime-silica glass samples at high strain rates. The effect of residual stresses was also studied by including thermally tempered glass samples divided into four residual stress groups. Despite the frequent application of glass products in the built environment, often post-processed into tempered or laminated glass, such investigations are still rare and thus in high demand when designing for extreme events such as extreme weather, ballistic impacts, or blast loads. A total of 315 samples were tested at a quasi-static and a dynamic loading rate ranging from 2.0 to $$4.3\cdot 10^{6}\ \mathrm{MPa\,s^{-1}}$$. It was found that the flexural strength of the glass across residual stress groups was strongly dependent on the applied dynamic loading rate, while the residual stresses themselves showed no significant effect on the loading-rate dependence. At the dynamic loading rate, strength increased by between 60 and 86%. Within the two tested loading rates, strength increased, as expected, with compressive surface stress. From the stereo-DIC deflection measurements, no change in Young's modulus with loading rate was observed.
PubDate: 2022-04-27

• Celebrating the international year of glass
PubDate: 2022-04-22

• Sustainable concrete for circular economy: a review on use of waste glass
Abstract: As a result of socio-economic growth, a major increase in solid waste generation is taking place, which can lead to resource depletion and environmental concerns. To address this inefficient cycle of make, use and dispose, the concept of the circular economy has recently been proposed, which de-linearizes the current relationship between economic growth, environmental degradation and resource consumption through its 6Rs (Reuse, Recycle, Redesign, Remanufacture, Reduce, Recover). In the construction sector, the production of binding agents and the transportation of virgin aggregates are currently associated with considerable environmental pollution. As a result, major attempts are taking place to substitute such ingredients with more sustainable and potentially cheaper materials. With waste glass having a production of roughly 100 million tons annually, and a low recycling rate of 26%, there is a growing number of studies unlocking its potential as an eco-friendly substitute for Portland cement (with particle size below 100 μm) or fine aggregate (with size below 4.75 mm) in concrete. This article therefore reviews the connection between the construction sector and the circular economy, with recycled glass at its center. Accordingly, by partially replacing cement or aggregate with recycled glass, on average, up to 19% greenhouse gas and 17% energy consumption reductions as well as major cost savings can be made. Additionally, in technical concrete terms, better fresh properties and fire resistance, lower permeability and, in fine grades, favorable cementitious properties are reported as major benefits of using waste glass as a sustainable construction material.
PubDate: 2022-04-01

• Engineered calculation of the uneven in-plane temperatures in Insulating Glass Units for structural design
Abstract: Insulating Glass Units (IGUs) are composite elements formed by two or more glass panes held together by structural edge seals, entrapping a gas for thermal and acoustic insulation.
There is a wealth of evidence that they are prone to cracking due to an uneven temperature distribution within each glass pane, caused in particular by solar radiation and enhanced by the shielding of the contour frame and by cast shadows. The accurate assessment of the temperature field in the glass is the input needed to calculate the thermal strain and the consequent state of stress. An engineered method for calculating the temperature field in the glass panes of multiple IGUs, of arbitrary composition, is developed here. This method considers the various sources of thermal exchange: conduction between the exposed, shadowed and border regions of each pane; convection and radiation with the surrounding environment and between the panes; and heat storage. Proper coefficients are defined for the multiple reflections of radiant energy between the various glass layers. The proposed model is applied to explanatory case studies, accounting for the daily variations of the external temperature and solar radiation.
PubDate: 2022-03-19

• The influence of fracture pattern on the residual resistance of laminated glass at high strain-rates: an experimental investigation of the post-fracture bending moment capacity based on time-temperature mapping of interlayer yield stress
PubDate: 2022-03-14

• Cantilevered laminated glass balustrades: the Conjugate Beam Effective Thickness method—part II: comparison and application
Abstract: Cantilevered laminated glass panels installed in continuous U-profile base shoes are regularly constructed as structural glass guards, parapets, and windscreens. The structural performance of laminated glass is strongly dependent on the shear coupling offered by the interlayer, between the bounding layered and monolithic limits of the glass lites. The most common simplified design approach consists of defining the effective thickness, i.e., the thickness of a monolithic section with equivalent properties. However, established effective thickness methods correlate poorly with the stress and deflection observed in numerical models simulating the bearing support of cantilevered laminated glass in an ordinary U-profile. The analytical Conjugate Beam Effective Thickness (CBET) method proposed in Part I of the present work accounts for the influence of different boundary and loading conditions, and is readily applied to evaluate the flexural performance of two-ply cantilevered laminated glass beams. In this paper, results evaluated with the proposed CBET method are compared with existing analytical methods and numerical results for case-study examples, demonstrating improved accuracy with respect to existing effective thickness methods for cantilevered laminated glass beams. The obtained closed-form formulas for the evaluation of deflection- and stress-effective thickness are summarized in tables to facilitate the practical application of the CBET method in design practice.
PubDate: 2022-02-10 DOI: 10.1007/s40940-021-00165-7

• Experimental and numerical characterization of twisting response of thin glass
Abstract: The use of new-generation thin, lightweight and damage-resistant glass, originally conceived for electronic displays, is taking its first steps in the built environment, in particular for adaptive and movable skins and façades. Its experimental characterization represents perhaps one of the main open problems in glass research and engineering. Indeed, standard methods for testing glass strength cannot be used, because geometrical nonlinearities invalidate the standard procedure and the strength calculation. Here, an innovative test procedure is proposed, in which a rectangular thin glass element is twisted to a high distortion level, while rigid elements constrain two opposite plate edges to remain straight.
A dedicated experimental apparatus, which can be used to test specimens of different sizes and thicknesses, has been designed and used to test, up to rupture, chemically tempered thin glass with thicknesses of 1.1 mm and 2.1 mm. The experimental results have been compared with those of numerical analyses, with particular regard to the influence of different constraint conditions on the plate response.
PubDate: 2022-02-09 DOI: 10.1007/s40940-022-00166-0

• SoundLab AI: Machine learning for sound insulation value predictions of various glass assemblies
Abstract: Modern architecture promotes a high demand for transparent building envelopes and especially glass facades. Commonly, facades are designed to fulfill a multitude of objectives such as superior aesthetic appearance, a high degree of weathering reliability, quick installation, high transparency as well as economic and ecologic efficiency. For such glazing applications, an assessment of acoustic properties and especially of sound insulation abilities is often required. Because of the complexity of such an experimental or computational investigation, given the framing systems and glass unit compositions, a reliable and fairly accurate estimation of the sound insulation properties of such systems is time-consuming and demanding. This paper provides a Machine Learning (ML) based estimation tool for the acoustic properties (weighted sound insulation value $$R_W$$, STC and OITC) of different glazing set-ups. A sufficiently rich database was used to train several machine learning algorithms. The acoustic properties are determined by comparing the third-octave or octave band spectrum of the sound reduction index with a reference curve (the typical curve for solid construction elements) specified in the standard DIN EN ISO 717-1. Sound insulation values can currently only be determined by complex and expensive experimental investigations or numerical simulations for certain glass set-ups. Hence, no efficient tool for convenient and reliable estimation of the sound insulation performance of glazing systems is available at the moment. To this end, the engineering team led by the authors conducted extensive studies on various glazings consisting of different glass assemblies with varying glass, cavity and interlayer thicknesses and different types of interlayer and gas fillings. Based on these research outcomes, a comprehensive web-based prediction program, the so-called AI Tool, has recently been developed. This program can provide a quick analysis and accurate prediction of arbitrary glazing set-ups, interlayers and glazing infills. A series of laboratory tests was conducted to validate the predictions of the AI Tool. The goal of this program is to provide designers, engineers, and architects an effective and economically efficient tool to facilitate design with respect to acoustic properties.
PubDate: 2022-02-01 DOI: 10.1007/s40940-022-00167-z

• Structural performance of a novel liquid-laminated embedded connection for glass
Abstract: Connections between load-bearing glass components play a major role in the structural integrity and aesthetics of glass applications. Recently, a new type of adhesive connection, known as the embedded laminated glass connection, has been developed, in which a metallic insert is embedded within a laminated glass unit by means of transparent polymeric foil interlayers and assembled through an autoclave lamination process. In this study, a novel variant of this connection, consisting of a thin steel insert encapsulated by a transparent cold-poured resin, is proposed and examined. In particular, the axial tensile mechanical response of this connection is assessed via numerical (FE) analyses and destructive pull-out tests performed on physical prototypes at different displacement rates in order to assess the effect of the strain-rate-dependent behaviour of the resin interlayer.
It was found that the pull-out stiffness, the maximum load-bearing capacity and the failure mode of the connection are significantly affected by the imposed displacement rate. The numerical (FE) analysis of the pull-out tests, performed in Abaqus, showed that the complex state of stress in the vicinity of the connection is the result of two load-transfer mechanisms, and that the relative contribution of these mechanisms depends on the insert geometry and the relative stiffnesses of the constituent materials. Overall, it is concluded that the prototypes are promising in terms of manufacturability, aesthetics and structural performance, and thus the novel variant connection considered in this study offers a promising alternative to existing load-bearing connections for laminated glass structures, but further investigations are required to ascertain its suitability for real-world applications.
PubDate: 2021-12-21 DOI: 10.1007/s40940-021-00162-w

• A connected glass community
PubDate: 2021-12-17 DOI: 10.1007/s40940-021-00164-8

• Enhanced engineered calculation of the temperature distribution in architectural glazing exposed to solar radiation
Abstract: The precise assessment of the temperature distribution on glass panes, whether they are single windows or façade components, is of paramount importance for the safety and durability of building skins, because many of the breakages experienced in practice are due to the thermal stress resulting from solar radiation. Here, an enhanced engineered method for the calculation of the temperature field in the panel is presented, which takes into account the different heat exchange phenomena that influence the temperature distribution. The possible presence of shadows and of a contouring frame is considered by dividing the panel into regions considered to be thermally homogeneous and, for each region, the time-dependent temperature is evaluated by establishing a transient energy balance. The proposed model is compared, from both a qualitative and a quantitative point of view, with the formulations of current Standards and design rules. Paradigmatic case studies are considered, taking into account the daily and seasonal variations of the external temperature and solar radiation. The effects of the size and shape of the shaded regions are also investigated in a parametric analysis. Once the temperature distribution is known, the stress state in the glass can be readily calculated with most commercial FEM codes.
PubDate: 2021-12-02 DOI: 10.1007/s40940-021-00163-9

• A durable coating to prevent stress corrosion effects on the surface strength of annealed glass
Abstract: The durability of an innovative polymeric coating recently developed by the authors to prevent stress corrosion in annealed glass is examined here. The coating, having functionally graded properties through the thickness, is optimised to provide very good adhesion to glass and excellent hydrophobic behavior on the side exposed to the environment, thus creating a good barrier to humidity, which is the triggering agent for stress corrosion. Three ageing scenarios are analysed: (i) cyclic loading, accomplished by subjecting coated samples to repetitive loading; (ii) natural weathering, performed by exposing coated samples to atmospheric agents; and (iii) artificial weathering, carried out by exposing coated specimens to fluorescent UV lamps, heat and water. The durability of the coating is assessed indirectly, on the basis of its residual effectiveness in preventing stress corrosion, by comparing the bending strength, obtained with the coaxial double ring test, of aged coated glass specimens with that of un-coated and freshly coated specimens. The obtained results prove that the proposed formulation is almost insensitive to cyclic loading and maintains very good performance under natural weathering, whereas it is slightly more sensitive to artificial weathering.
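The transient energy balance used in the temperature-distribution entries above can be illustrated with a minimal lumped-capacitance sketch. This is not the authors' engineered method (which couples exposed, shadowed and border regions and accounts for inter-pane radiation exchange); it integrates a single pane region per unit area with explicit Euler, and the function names and all coefficient values are illustrative assumptions.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def pane_temperature(T0, t_end, dt, G, T_air,
                     alpha=0.12,       # solar absorptance of the pane (assumed)
                     h_c=8.0,          # convective film coefficient, W/(m^2 K) (assumed)
                     eps=0.84,         # long-wave emissivity of glass
                     thickness=0.006, rho=2500.0, c=720.0):
    """Explicit-Euler integration of a per-unit-area energy balance for one
    thermally homogeneous pane region: heat storage = absorbed solar
    + convection + long-wave radiation exchange with the surroundings.
    G(t): incident solar irradiance [W/m^2]; T_air(t): ambient temperature [K]."""
    C = rho * c * thickness  # areal heat capacity, J/(m^2 K)
    T, t = T0, 0.0
    while t < t_end:
        q = (alpha * G(t)                           # absorbed solar radiation
             + h_c * (T_air(t) - T)                 # convection
             + eps * SIGMA * (T_air(t)**4 - T**4))  # radiation (surroundings ~ air temp.)
        T += dt * q / C
        t += dt
    return T

# With no sun, the pane relaxes toward the ambient temperature:
T = pane_temperature(303.15, 10_000.0, 1.0, G=lambda t: 0.0,
                     T_air=lambda t: 293.15)
```

With a time step much smaller than the thermal time constant (here on the order of fifteen minutes for a 6 mm pane), the scheme is stable and the pane settles to ambient temperature in the absence of solar gain.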
PubDate: 2021-09-21 DOI: 10.1007/s40940-021-00161-x

• Experimental study and comparison of different fully transparent laminated glass beam designs
Abstract: Laminated glass beams without metallic or polymeric reinforcements generally lack post-breakage strength and ductility. This paper presents a comparative study testing five different fully transparent laminated glass beam designs in order to see how parameters such as the number and thickness of glass sheets (3 × 10 mm or 5 × 6 mm), the interlayer material (PVB Clear or SentryGlas), and the thermal treatment of the glass (annealed or heat-strengthened) affect the pre-breakage performance and post-breakage safety. A buckling analysis is also performed using a numerical model in ABAQUS CAE. The study includes a comparison between the results of different experimental mechanical tests on laminated glass beams, including the tests presented in this paper as well as other tests found in the literature. All designs presented linear elastic behaviour until initial breakage. The interlayer material mainly affected the crack shape of the laminated glass beams. Beams with five sheets of annealed glass had a more progressive breakage, and therefore a safer behaviour, than beams with three sheets of annealed or heat-strengthened glass.
PubDate: 2021-09-16 DOI: 10.1007/s40940-021-00160-y

• Topical issue "Projects and case studies"
PubDate: 2021-09-01 DOI: 10.1007/s40940-021-00159-5

• Repositioning Messeturm–Maximum Transparency
Abstract: The Messeturm ("Trade Fair Tower") in Frankfurt is currently being adapted to the requirements of a modern office building. The lobby area has been enlarged by a highly transparent façade consisting of oversized insulating glass (IG) units. The IG units are curved and have a size of up to 17 m × 2.8 m. To obtain a perfectly curved façade, the IG units were fabricated with laminated cold-bent glass panes. Horizontally, the IG units are supported by tapered stainless steel fins. Inclined steel beams connect the top of the façade fins with the tower structure, creating a rounded roof. The vertical façade with its glass roof allows maximum transparency for people inside the lobby as well as for people passing by outside on their way to the nearby trade fair grounds. The glass was fabricated by sedak. The façade was installed by seele on behalf of the owner OFFICEFIRST, the asset management of Blackstone.
PubDate: 2021-09-01 DOI: 10.1007/s40940-020-00140-8

• Conceptual design and FEM structural response of a suspended glass sphere made of reinforced curved polygonal panels
Abstract: The paper introduces a novel concept for structural glass shells that is based on the mechanical coupling of doubly curved heat-bent glass panels and a wire-frame mesh, which constitutes a grid of unbonded edge reinforcement. Additionally, this grid has the purpose of providing redundancy. The panels have a load-bearing function; they are clamped at the vertices and dry-assembled. The main novelty lies in the use of polygonal curved panels with a nodal force-transfer mechanism. This concept has been validated on an illustrative design case of a 6 m-diameter suspended glass sphere, in which regular pentagonal and hexagonal spherical panels are employed. The good strength and stiffness achieved for this structure are demonstrated by means of local and global FE models. Another fundamental feature of the concept is that the reinforcement grid provides residual strength in the extreme scenario in which all panels have completely failed. A quantitative measure of redundancy is obtained by comparing this scenario with the ULS.
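The effective-thickness concept central to the Conjugate Beam Effective Thickness (CBET) balustrade entries can be made concrete with the classical Wölfel-type formulas for a two-ply, simply supported laminated beam. These are the established simplified formulas that the CBET method refines for cantilever conditions, not the CBET formulas themselves; the function names are illustrative, and the 9.6 coefficient corresponds to the standard Wölfel approach for distributed load as commonly quoted, so treat this as a hedged sketch.

```python
def shear_transfer(E, G_int, h1, h2, hv, a):
    """Shear-transfer coefficient between the layered (0) and monolithic (1)
    limits for a two-ply laminate, per unit width; a is the bending length,
    E the glass modulus, G_int the interlayer shear modulus (consistent units)."""
    hs = 0.5 * (h1 + h2) + hv        # distance between ply mid-planes
    hs1 = hs * h2 / (h1 + h2)
    hs2 = hs * h1 / (h1 + h2)
    Is = h1 * hs1**2 + h2 * hs2**2   # parallel-axis (Steiner) term of the section
    return 1.0 / (1.0 + 9.6 * E * Is * hv / (G_int * hs**2 * a**2))

def deflection_effective_thickness(h1, h2, hv, gamma):
    """Thickness of the monolithic section with equivalent bending stiffness."""
    hs = 0.5 * (h1 + h2) + hv
    hs1 = hs * h2 / (h1 + h2)
    hs2 = hs * h1 / (h1 + h2)
    Is = h1 * hs1**2 + h2 * hs2**2
    return (h1**3 + h2**3 + 12.0 * gamma * Is) ** (1.0 / 3.0)

# Bounding limits for two 6 mm plies with a vanishing interlayer:
layered    = deflection_effective_thickness(6.0, 6.0, 0.0, 0.0)  # (2 * 6^3)^(1/3)
monolithic = deflection_effective_thickness(6.0, 6.0, 0.0, 1.0)  # -> 12 mm
```

The two limits reproduce the layered and monolithic sections mentioned in the abstracts: with full shear coupling and no interlayer gap, the effective thickness equals the summed ply thickness.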
PubDate: 2021-09-01 DOI: 10.1007/s40940-020-00130-w

• The skypool: bringing architectural imagination to life
Abstract: The Sky Pool in Nine Elms, London, is the world's first fully transparent suspended swimming pool, allowing residents to swim 15 m between two buildings 10 floors up, and will become a landmark and an unprecedented feat of architecture for the capital. The Sky Pool was conceived as a bold, innovative and thrilling unique selling point for Embassy Gardens, one of the leading riverside developments in zone 1 central London, which provides 1500 new homes, world-class amenities, 40,160 m² of office space across two buildings and 12,100 m² of retail spaces, cafes, bars and restaurants. Phase 1 and Phase 3 of Embassy Gardens were delivered by Ballymore, and Phase 2 by a joint venture between Ballymore and EcoWorld. Wrapped around the new U.S. Embassy, the 8 ha riverside neighbourhood has a prominent location in one of Europe's most significant regeneration projects, covering the 227 ha Greater London Authority's Vauxhall Nine Elms Battersea investment opportunity area, Nine Elms London (Development sites, 2021), bringing in 20,000 new homes (Fig. 1). The original concept from the architect, Arup Associates, was to use glass for its construction; however, initial studies showed that structural glass was not the most efficient material choice. The shortcomings in strength were particularly exacerbated by the requirement to join glass panels together, which created areas of increased stress. Furthermore, the possibility of damage and expensive replacement led to the idea of using PMMA (polymethyl methacrylate), commonly referred to as acrylic, for the project. The use of cast PMMA in such large sizes presented significant construction challenges, including having to build an entirely new building to fabricate the structure, conceiving new ways to fabricate and bond the panels, and enhancing already tight quality control. The design also had to address the structural consequences of the pool being supported by two independent buildings, which can sway and settle independently, and to accommodate differential thermal expansion.
PubDate: 2021-09-01 DOI: 10.1007/s40940-021-00158-6

• Cantilevered laminated glass balustrades: the Conjugate Beam Effective Thickness method—part I: the analytical model
Abstract: A proper assessment of shear coupling is necessary for the evaluation of laminated glass performance between the bounding layered and monolithic limits. The most common simplified design approach consists of defining the effective thickness, i.e., the thickness of a monolithic section with equivalent flexural properties. Cantilevered laminated glass balustrades are common applications of structural glass; however, the use of existing effective thickness methods presents strong limitations for their design. Here, the Conjugate Beam Effective Thickness (CBET) method is presented, based on the conjugate beam analogy recently proposed to evaluate the response of inflected laminates formed by external elastic beams bonded by an adhesive ply. The conjugate beam analogy, applied to laminated glass beams, allows accurate evaluation of the shear stress transmitted by the interlayer, based on the response of a monolithic conjugate beam, with the option to constrain relative sliding of the plies at a beam end. Once the shear coupling is known, the effective thickness may be evaluated with the proposed CBET model by comparing the maximum stress and deflection of the laminated beam with those of a monolithic Euler–Bernoulli beam.
The CBET method's formulas can be readily applied to evaluate the maximum stress and cantilever free-end deflection for different load and boundary conditions, representative of a cantilevered laminated glass balustrade supported in a U-profile.
PubDate: 2021-08-19 DOI: 10.1007/s40940-021-00156-8

• Abstract: Experimental strength tests are performed on two series of nominally equal plate specimens of annealed soda-lime glass subjected to either ring-on-ring or ball-on-ring bending. The Weibull effective area, which represents a fictitious surface area exposed to uniform tension, is calculated using closed-form solutions. Finite-size weakest-link systems are implemented numerically in a computationally intensive procedure for random sampling of plates extracted from a virtual jumbo pane whose surface area contains a set of stochastic Griffith flaws. A non-linear finite element analysis is conducted to compute the bending stresses. The glass surface condition is represented by different flaw-size concepts that depend on a truncated, exponentially decaying flaw-size distribution. Stress corrosion effects are modelled by the implementation of subcritical crack growth. The effective ball contacting radius is determined in a numerical computation. The results show that surface size effects in glass are not only a matter of strength scaling, as the shape of the distribution also changes. While the lowest strength value, as per the major in-plane principal stress at the recorded fracture origin, is very similar in the respective data sets, the strongest specimen observed in ball-on-ring testing is over 70% stronger than the strongest specimen observed in ring-on-ring bending. The shift function is used to make visual comparisons of the difference in quantiles between the observed data sets. Use of an ordinary Weibull distribution leads to non-conservative strength predictions on smaller effective areas, and, on larger areas, to strength predictions that are too low to be viable for glass design. The numerical implementation of finite-size weakest-link systems can produce better predictions of the strength scaling than a Weibull distribution, in particular when the flaw-size concept is modified to include a doubly stochastic flaw-size distribution or a random noise added to each subdivided region of the discretized surface area. The simulated ball-on-ring fracture origins exhibit greater spread from the centre point than is otherwise observed in laboratory tests. It is indicated that the chosen representation of the surface condition may not be accurate enough to model all fracture origins in the ball-on-ring setup, even though acceptable results are obtained with the ring-on-ring model. There is a need for more insight into the surface condition of glass, which would be conducive to the development of flaw-size-based weakest-link modelling.
PubDate: 2021-08-03 DOI: 10.1007/s40940-021-00157-7

JournalTOCs
School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk  Tel: +00 44 (0)131 4513762
2022-05-17 23:43:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49402153491973877, "perplexity": 6272.423300007645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00681.warc.gz"}
Background Concepts

The purpose of this section is to explain several key concepts used throughout the CPROVER framework at a high level, ignoring the details of the actual implementation. In particular, we will discuss different ways to represent programs in memory, three important analysis methods and some commonly used terms.

Representations

One of the first questions we should be considering is how we represent programs in such a way that we can easily analyze and reason about them. As it turns out, the best way to do this is to use a variety of different representations, each representing a different level of abstraction. These representations are designed in such a way that for each analysis we want to perform, there is an appropriate representation, and it is easy to go from representations that are close to the source code to representations that focus on specific semantic aspects of the program. The representations that the CPROVER framework uses mirror those used in modern compilers such as LLVM and gcc. I will point out those places where the CPROVER framework does things differently, attempting to give rationales wherever possible. One in-depth resource for most of this section is the classic compiler construction text book ''Compilers: Principles, Techniques and Tools'' by Aho, Lam, Sethi and Ullman.

To illustrate the different concepts, we will consider a small example program. While the program is in C, the general ideas apply to other languages as well - see later sections of this manual to understand how the specific features of those languages are handled. Our running example will be a program that calculates factorials.

```c
/* factorial.c
 *
 * For simplicity's sake, we just give the forward
 * declarations of atoi and printf.
 */
int atoi(const char *);
int printf(const char *, ...);

unsigned long factorial(unsigned n)
{
  unsigned long fac = 1;
  for (unsigned int i = 1; i <= n; i++) {
    fac *= i;
  }
  return fac;
}

/* Error handling elided - this is just for illustration. */
int main(int argc, const char **argv)
{
  unsigned n = atoi(argv[1]);
  printf("%u! = %lu\n", n, factorial(n));
  return 0;
}
```

The question of this first section is: how do we represent this program in memory so that we can do something useful with it? One possibility would be to just store the program as a string, but this is clearly impractical: even finding whether there is an assignment to a specific variable would require significant parsing effort. For this reason, the first step is to parse the program text into a more abstract representation.

AST

The first step in representing a program in memory is to parse the program, at the same time checking for syntax errors, and store the parsing result in memory. The key data structure that stores the result of this step is known as an Abstract Syntax Tree, or AST for short (cf. Wikipedia). ASTs are still relatively close to the source code, and represent the structure of the source code while abstracting away from syntactic details, e.g., dropping parentheses, semicolons and braces as long as those are only used for grouping. Considering the example of the C program given above, we first notice that the program describes (in C terms) a single translation unit, consisting of four top-level declarations (the two function forward declarations of atoi and printf, and the function definitions of factorial and main). Let us start by considering the declaration of atoi. This gives rise to a subtree modeling that we have a function called atoi whose return type is int, with an unnamed argument of type const char *.
We can represent this using a tree that has, for instance, the following structure (this is a simplified version of the tree that the CPROVER framework uses internally):

[Figure: AST for the atoi declaration]

This graph shows the (simplified) AST structure for the atoi function. The top level says that this is a global entity, namely one that has code (i.e., a function), called atoi and yielding int. Furthermore, it has a child node initiating a parameter list, and there is a node in the parameter list for each parameter, giving its type and name, if a name is given.

Extending this idea, we can represent the structure of the factorial function using ASTs. The idea here is that the code itself has a hierarchical structure. In the case of C, this starts with the block structure of the code: at the top, we start with a block of code, having three children, each being a ''statement'' node:

1. unsigned long fac = 1
2. for (unsigned int i = 1; i <= n; i++) { fac *= i }
3. return fac

The first statement is already a basic statement: we represent it as a local declaration (similar to the global declarations above) of a variable.

[Figure: AST for unsigned long fac = 1]

The second statement is a compound statement, which we can decompose further. At the top level, we have a node stating that this is a for statement, with four child nodes:

1. Another declaration node, declaring variable i.
2. An expression node, with operator <= and two children giving the LHS as variable i and the RHS as variable n.
3. An expression node with post-fix operator ++ and a child giving the variable i as argument.
4. A block node, starting a new code block. This node has one child:
   1. An expression node with top-level operator *= and two child nodes giving the LHS as variable fac and the RHS as variable i.
All in all, the AST for this piece of code looks like this:

[Figure: AST for the for loop]

Finally, the third statement is again simple: it consists of a return statement node, with a child node for the variable expression fac. Since the AST is very similar to the first AST above, we omit it here. All in all, the AST for the function body looks like this:

[Figure: AST for the body of factorial]

Using the AST for the function body, we can easily produce the definition of the factorial function:

[Figure: AST for factorial (full definition)]

In the end, we produce a sequence of trees modeling each declaration in the translation unit (i.e., the file factorial.c). This data structure is already useful: at this level, we can easily derive simple information such as ''which functions are being defined?'', ''what are the arguments to a given function?'' and so on.

Symbol tables and variable disambiguation

Nevertheless, for many analyses, this representation needs to be transformed further. In general, the first step is to resolve variable names. This is done using an auxiliary data structure known as the symbol table. The issue that this step addresses is that the meaning of a variable name depends on a given scope - for instance, in a C program, we can define the variable i as a local variable in two different functions, so that the name i refers to two different objects, depending on which function is being executed at a given point in time. To resolve this problem, a first transformation step is performed, changing (short) variable names to some form of unique identifier. This ensures that each variable identifier is used only once, and all accesses to a given variable can be easily found by just looking for the variable identifier. For instance, we could change each variable name into a pair of the name and a serial number. The serial number gets increased whenever a variable of that name is declared.
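As a rough illustration (this is not the CPROVER implementation, which prefixes function names as described below), the name-plus-serial-number scheme can be sketched in C: each time a declaration of a name is seen, its counter is bumped and the disambiguated identifier is produced.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: a tiny symbol table mapping each base
 * name to the number of declarations seen so far.  Declaring "n"
 * twice yields the identifiers "n.1" and "n.2".  Capacity checks
 * are omitted for brevity. */

#define MAX_NAMES 64

static char names[MAX_NAMES][32];
static int serials[MAX_NAMES];
static int num_names = 0;

/* Called when a declaration of `name` is seen; writes the
 * disambiguated identifier (e.g. "n.2") into `out`. */
void declare(const char *name, char *out)
{
  for (int i = 0; i < num_names; i++) {
    if (strcmp(names[i], name) == 0) {
      serials[i]++;
      sprintf(out, "%s.%d", name, serials[i]);
      return;
    }
  }
  strcpy(names[num_names], name);
  serials[num_names] = 1;
  sprintf(out, "%s.%d", name, 1);
  num_names++;
}
```

A real implementation would key the table on scopes rather than a flat counter, but the effect is the same: every declaration gets a globally unique identifier.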
In the example, the ASTs for factorial and main after resolving variable names would look roughly like this:

[Figure: ASTs with resolved variables]

Note that the parameter n in factorial and the local variable n in main are now disambiguated as n.1 and n.2; furthermore, we leave the names of global objects as-is. In the CPROVER framework, a more elaborate system is used: local variables are prefixed with the function names, and further disambiguation is performed by adding indices. For brevity, we use indices only.

Further information on ASTs can be found in the Wikipedia page and the materials linked there. Additionally, there is an interesting discussion on StackOverflow about abstract versus concrete syntax trees.

At this level, we can already perform a number of interesting analyses, such as basic type checking. But for more advanced analyses, other representations are better: the AST contains many different kinds of nodes (essentially, one per type of program construct), and has a rather intricate structure.

Intermediate Representations (IR)

The ASTs in the previous section represent the syntax of a program, including all the features of a given programming language. But in practice, most programming languages have a large amount of ''syntactic sugar'': constructs that are, technically speaking, redundant, but make programming a lot easier. For analysis, this means that if we immediately try to work on the initial AST, we would have to handle all these various cases. To simplify analysis, it pays off to bring the AST into simpler forms, known as intermediate representations (short IR). An IR is usually given as some form of AST, using a more restricted subset or a variant of the original language that is easier to analyze than the original version. Taking the example from above, we rewrite the program into a simpler form of the C language: instead of allowing powerful control constructs such as while and for, we reduce everything to if and goto.
In fact, we even restrict if statements: an if statement should always be of the form if (*condition*) goto *target* else goto *target*;. As it turns out, this is sufficient to represent every C program. The factorial function in our example program can then be rewritten as follows:

```c
unsigned long factorial(unsigned n)
{
  unsigned long fac = 1;
  // Replace the for loop with if and goto
  unsigned int i = 1;
for_loop_start:
  if (i <= n) goto for_loop_entry else goto for_loop_end;
for_loop_entry:
  fac *= i;
  i++;
  goto for_loop_start;
for_loop_end:
  return fac;
}
```

We leave it up to the reader to verify that both versions of the function behave the same way, and to draw the function as an AST. In the CPROVER framework, a number of different IRs are employed to simplify the program under analysis into a simple core language step-by-step. In particular, expressions are brought into much simpler forms. This sequence of transformations is described in later chapters.

Control Flow Graphs (CFG)

Another important representation of a program can be gained by transforming the program structure into a control flow graph, short CFG. While the AST focuses more on the syntactic structure of the program, keeping constructs like while loops and similar forms of structured control flow, the CFG uses a unified graph representation for control flow. In general, for analyses based around Abstract Interpretation (see Abstract Interpretation), it is usually preferable to use a CFG representation, while other analyses, such as variable scope detection, may be easier to perform on ASTs. The general idea is to present the program as a graph. The nodes of the graph are instructions or sequences of instructions. In general, the nodes are basic blocks: a basic block is a sequence of statements that is always executed in order from beginning to end. The edges of the graph describe how the program execution may move from one basic block to the next.
Note that single statements are always basic blocks; this is the representation used inside the CPROVER framework. In the examples below, we try to use maximal basic blocks (i.e., basic blocks that are as large as possible); this can be advantageous for some analyses. Let us consider the factorial function as an example. As a reminder, here is the code, in IR:

```c
unsigned long fac = 1;
unsigned int i = 1;
for_loop_start:
  if (i <= n) goto for_loop_entry else goto for_loop_end;
for_loop_entry:
  fac *= i;
  i++;
  goto for_loop_start;
for_loop_end:
  return fac;
```

We rewrite the code with disambiguated variables (building the AST from it is left as an exercise):

```c
unsigned long fac.1 = 1;
unsigned int i.1 = 1;
for_loop_start:
  if (i.1 <= n.1) goto for_loop_entry else goto for_loop_end;
for_loop_entry:
  fac.1 *= i.1;
  i.1++;
  goto for_loop_start;
for_loop_end:
  return fac.1;
```

This function consists of four basic blocks:

1. unsigned long fac.1 = 1; unsigned int i.1 = 1;
2. if (i.1 <= n.1) goto for_loop_entry else goto for_loop_end (this block has a label, for_loop_start).
3. fac.1 *= i.1; i.1++; goto for_loop_start (this block has a label, for_loop_entry).
4. return fac.1 (this block has a label, for_loop_end).

One way to understand which instructions form basic blocks is to consider the successors of each instruction. If we have two instructions A and B, we say that B is a successor of A if, after executing A, we can execute B without any intervening instructions. For instance, in the example above, the loop initialization statement unsigned int i.1 = 1 is a successor of unsigned long fac.1 = 1. On the other hand, return fac.1 is not a successor of unsigned long fac.1 = 1: we always have to execute some other intermediate statements to reach the return statement. Now, consider the if statement, if (i.1 <= n.1) goto for_loop_entry else goto for_loop_end. This statement has two successors: fac.1 *= i.1 and return fac.1.
Similarly, we say that A is a predecessor of B if B is a successor of A. We find that the if statement has two predecessors, unsigned int i.1 = 1 and goto for_loop_start. A basic block is a sequence of instructions with the following properties:

1. If B comes directly after A in the sequence, B must be the sole successor of A.
2. If A comes directly before B in the sequence, A must be the sole predecessor of B.

In particular, each member of the sequence but the first must have exactly one predecessor, and each member of the sequence but the last must have exactly one successor. These criteria explain why we have the basic blocks described above. Putting everything together, we get a control flow graph like this:

[Figure: Control flow graph for factorial]

The graph can be read as follows: each node corresponds to a basic block. The initial basic block (where the function is entered) is marked with a double border, while those basic blocks that leave the function have a gray background. An edge from a basic block B to a basic block B' means that if the execution reaches the end of B, execution may continue in B'. Some edges are labeled: the edge leaving the comparison basic block with the label true, for instance, can only be taken if the comparison did, in fact, return true. Note that this representation makes it very easy to interpret the program, keeping just two pieces of state: the current position (which basic block and which line), and the values of all variables (in real software, this would also include parts of the heap and the call stack). Execution proceeds as follows: as long as there are still instructions to execute in the current basic block, run the next instruction and move the current position to right after that instruction. At the end of a basic block, take one of the available outgoing edges to another basic block.
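The if/goto IR shown earlier is almost valid C already; adding the semicolon that C requires before else gives an executable version in which each labelled region is exactly one basic block of the CFG, so running it is precisely the block-by-block interpretation just described:

```c
/* The factorial IR, written as executable C.  Each labelled
 * region corresponds to one basic block of the control flow
 * graph: BB1 (init), BB2 (test), BB3 (body), BB4 (exit). */
unsigned long factorial_cfg(unsigned n)
{
  /* BB1: initialization */
  unsigned long fac = 1;
  unsigned int i = 1;
for_loop_start:
  /* BB2: loop test, with two outgoing edges */
  if (i <= n) goto for_loop_entry; else goto for_loop_end;
for_loop_entry:
  /* BB3: loop body, single outgoing edge back to BB2 */
  fac *= i;
  i++;
  goto for_loop_start;
for_loop_end:
  /* BB4: exit block */
  return fac;
}
```

Calling factorial_cfg(5) walks the blocks BB1, then BB2/BB3 five times, then BB2 and BB4, and yields 120.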
In the CPROVER framework, we often do not construct CFGs explicitly, but instead use an IR that is constructed in a way very similar to CFGs, known as ''GOTO programs''. This IR is used to implement a number of static analyses, as described in sections Frameworks: and Specific analyses:.

SSA

While control flow graphs are already quite useful for static analysis, some techniques benefit from a further transformation to a representation known as static single assignment, short SSA. The point of this step is to ensure that we can talk about the entire history of assignments to a given variable. This is achieved by renaming variables again: whenever we assign to a variable, we clone this variable by giving it a new name. This ensures that each variable appearing in the resulting program is written to exactly once (but it may be read arbitrarily many times). In this way, we can refer to earlier values of the variable by just referencing the name of an older incarnation of the variable. We illustrate this transformation by first showing how the body of the for loop of factorial is transformed. We currently have:

expression: fac.1 *= i.1
expression: i.1 ++

We now give a second number to each variable, counting the number of assignments so far. Thus, the SSA version of this code turns out to be

expression: fac.1.2 = fac.1.1 * i.1.1
expression: i.1.2 = i.1.1 + 1

This representation now allows us to state facts such as ''i is increasing'' by writing i.1.1 < i.1.2. At this point, we run into a complication. Consider the following piece of code for illustration:

```c
// Given some integers a, b
int x = a;
if (a < b)
  x = b;
return x;
```

The corresponding control flow graph looks like this:

[Figure: CFG for maximum function]

When we try to transform to SSA, we get:

[Figure: CFG for maximum function - SSA, attempt]

Depending on which path the execution takes, we have to return either x.1 or x.2!
The way to make this work is to introduce a function Φ that selects the right instance of the variable; in the example, we would have

[Figure: CFG for maximum function - SSA using Φ]

In the CPROVER framework, we provide a precise implementation of Φ, using explicitly tracked information about which branches were taken by the program. There are also some differences in how loops are handled (finite unrolling in CPROVER, versus a Φ-based approach in compilers); this approach will be discussed in a later chapter. For the time being, let us come back to factorial. We can now give an SSA using Φ functions:

[Figure: Control flow graph in SSA for factorial]

The details of SSA construction, plus some discussion of how it is used in compilers, can be found in the original paper. The SSA is an extremely helpful representation when one wishes to perform model checking on the program (see next section), since it is much easier to extract the logic formulas used in this technique from an SSA compared to a CFG (or, even worse, an AST). That being said, the CPROVER framework takes a different route, opting to convert to an intermediate representation known as GOTO programs instead.

Analysis techniques

Bounded model checking

One of the most important analysis techniques provided by the CPROVER framework, implemented in the CBMC (and JBMC) tools, is bounded model checking, a specific instance of a method known as Model Checking. The basic question that model checking tries to answer is: given some system (in our case, a program) and some property, can we find an execution of the system such that it reaches a state where the property holds? If yes, we would like to know how the program reaches this state - at the very least, we want to see what inputs are required, but in general, we would prefer having a trace, which shows what statements are executed and in which order. In general, a trace describes which statements of the program were executed, and which intermediate states were reached.
Often, it is sufficient to only provide part of the intermediate states (omitting some entirely, and only mentioning parts that cannot be easily reconstructed in others). As it turns out, model checking for programs is, in general, a hard problem. Part of the reason for this is that many model checking algorithms strive for a form of ''completeness'' where they either find a trace or return a proof that such a trace cannot possibly exist. Since we are interested in generating test cases, we prefer a different approach: it may be that a certain target state is reachable only after a very long execution, or not at all, but this information does not help us in constructing test cases. For this reason, we introduce an execution bound that describes how deep we go when analyzing a program. Model checking techniques using such execution bounds are known as bounded model checking; they will return either a trace, or a statement that says ''the target state could not be reached in n steps'', for a given bound n. Thus, for a given bound, we always get an underapproximation of all states that can be reached: we can certainly find those reachable within the given bound, but we may miss states that can be reached only with more steps. Conversely, we will never claim that a state is not reachable within a certain bound if there is, in fact, a way of reaching this state. The bounded model checking techniques used by the CPROVER framework are based on symbolic model checking, a family of model checking techniques that work on sets of program states and use advanced tools such as SAT solvers (more on that below) to calculate the set of reachable states. The key step here is to encode both the program and the set of states using an appropriate logic, mostly propositional logic and (fragments of) first-order logic. 
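The underapproximation property can be illustrated with a toy explicit-state bounded checker. This is only a sketch of the idea, not CBMC's algorithm (which, as just described, works symbolically on formulas rather than by enumerating states): we take a hypothetical transition system over the states 0..15 with two transitions, x → (x+3) mod 16 and x → 2x mod 16, and ask whether a target state is reachable from state 0 within a given bound.

```c
#include <stdbool.h>
#include <string.h>

/* Illustrative sketch, not CBMC's algorithm.  Explicit-state bounded
 * reachability for a toy transition system: states 0..15, initial
 * state 0, transitions x -> (x+3) mod 16 and x -> 2*x mod 16.
 * Returns whether `target` is reachable in at most `bound` steps.
 * For small bounds this underapproximates full reachability: states
 * that need longer executions are simply not found. */
bool bmc_reachable(int target, int bound)
{
  bool current[16] = { false };  /* set of states reached so far */
  current[0] = true;             /* the initial state */
  for (int step = 0; step < bound; step++) {
    bool next[16];
    memcpy(next, current, sizeof next);  /* keep shorter executions */
    for (int s = 0; s < 16; s++) {
      if (current[s]) {
        next[(s + 3) % 16] = true;  /* first transition */
        next[(2 * s) % 16] = true;  /* second transition */
      }
    }
    memcpy(current, next, sizeof current);
  }
  return current[target];
}
```

For instance, state 1 is reachable in this system, but only after six steps; with any bound below six, bmc_reachable(1, bound) reports it as unreachable - exactly the kind of underapproximation described above.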
In the following, we will quickly discuss propositional logic, in combination with SAT solving, and show how to build a simple bounded model checker for a finite-state program. Actual bounded model checking for software requires a number of additional steps and concepts, which will be introduced as required later on.

Propositional logic and SAT solving

Many of the concepts in this section can be found in more detail in the Wikipedia article on Propositional logic. Let us start by looking at propositional formulas. A propositional formula consists of propositional variables, say x, y and z, that can take the Boolean values true and false, connected together with logical operators (often called junctors), namely and, or and not. Sometimes, one introduces additional junctors, such as xor or implies, but these can be defined in terms of the three basic junctors just described. Examples of propositional formulas would be ''x and y'' or ''not x or y or z''. We can evaluate formulas by setting each variable to a Boolean value and reducing using the following rules:

• x and false = false and x = false or false = not true = false
• x or true = true or x = true and true = not false = true

An important related question is: given a propositional formula, is there a variable assignment that makes it evaluate to true? This is known as the SAT problem. The most important things to know about SAT are:

1. It forms the basis for bounded model checking algorithms.
2. It is a very hard problem to solve in general: it is NP-complete, meaning that it is easy to check a solution, but (as far as we know) hard to find one.
3. There has been impressive research in SAT solvers that work well in practice for the kinds of formulas that we encounter in model checking. A commonly-used SAT solver is minisat.
4. SAT solvers use a specific input format for propositional formulas, known as the conjunctive normal form.
For details, see the linked Wikipedia page; roughly, a conjunctive normal form formula is a propositional formula with a specific shape: at the lowest level are atoms, which are propositional variables ''x'' and negated propositional variables ''not x''; the next layer above are clauses, which are sequences of atoms connected with ''or'', e.g. ''not x or y or z''. The top layer consists of sequences of clauses, connected with ''and''. As an example of how to use a SAT solver, consider the following formula (in conjunctive normal form):

''(x or y) and (x or not y) and x and y''

We can represent this formula (two variables, four clauses) in the minisat input format as:

```
p cnf 2 4
1 2 0
1 -2 0
1 0
2 0
```

Compare the Minisat user guide. Try to run minisat on this example. What would you expect, and what result do you get? Next, try running minisat on the following formula:

''(x or y) and (x or not y) and (not x) and y''

What changed? Why?

How bounded model checking works

TODO This section needs to be written on the next documentation day. The following content needs to be added:

• How does it work? Encoding of state as propositions/first-order propositions, step function and composition of step functions.
• Solve question: is there a model for this formula?
• Traces

Where to go from here

The above section gives only a superficial overview on how SAT solving and bounded model checking work. Inside the CPROVER framework, we use a significantly more advanced engine, with numerous optimizations to the basic algorithms presented above. One feature that stands out is that we do not reduce everything to propositional logic, but instead use a more powerful logic, namely quantifier-free first-order logic. The main difference is that instead of propositional variables, we allow expressions that return Boolean values, such as comparisons between numbers or string matching expressions. This gives us a richer logic to express properties.
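To see concretely what a SAT solver decides, the two example formulas above can be checked by brute force over all four assignments of x and y. This illustrates the SAT question only - real solvers such as minisat use far more sophisticated algorithms than enumeration.

```c
#include <stdbool.h>

/* Brute-force satisfiability check for the two example formulas.
 * Illustration of the SAT problem, not of how minisat works. */

/* (x or y) and (x or not y) and x and y */
static bool formula1(bool x, bool y)
{
  return (x || y) && (x || !y) && x && y;
}

/* (x or y) and (x or not y) and (not x) and y */
static bool formula2(bool x, bool y)
{
  return (x || y) && (x || !y) && !x && y;
}

/* Enumerate all 4 assignments; report whether any satisfies f. */
bool satisfiable(bool (*f)(bool, bool))
{
  for (int x = 0; x <= 1; x++)
    for (int y = 0; y <= 1; y++)
      if (f(x, y))
        return true;
  return false;
}
```

The first formula is satisfied by x = true, y = true; the second requires x to be both true (third clause of the original, second clause here forces it together with the first) and false, so no assignment works - which is what minisat reports as UNSATISFIABLE. Note also that enumeration over n variables takes 2^n checks, which is why the first-order expressions mentioned above need dedicated solvers rather than brute force.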
Of course, a simple SAT solver cannot deal with such formulas, which is why we go to SMT solvers instead - these solvers can deal with specific classes of first-order formulas (like the ones we produce). One well-known SMT solver is Z3.

Static analysis

While BMC analyzes the program by transforming everything to logic formulas and, essentially, running the program on sets of concrete states, another approach to learn about a program is based on the idea of interpreting an abstract version of the program. This is known as abstract interpretation. Abstract interpretation is one of the main methods in the area of static analysis.

Abstract Interpretation

The key idea is that instead of looking at concrete program states (e.g., ''variable x contains value 1''), we look at some sufficiently-precise abstraction (e.g., ''variable x is odd'', or ''variable x is positive''), and perform interpretation of the program using such abstract values. Coming back to our running example, we wish to prove that the factorial function never returns 0. An abstract interpretation is made up of four ingredients:

1. An abstract domain, which represents the analysis results.
2. A family of transformers, which describe how basic programming language constructs modify the state.
3. A map that takes a pair of a program location (e.g., a position in the program code) and a variable name and yields a value in the abstract domain.
4. An algorithm to compute a ''fixed point'', computing a map as described in the previous step that describes the behavior of the program.

The first ingredient we need for abstract interpretation is the abstract domain. The domain allows us to express what we know about a given variable or value at a given program location; in our example, whether it is zero or not.
The way we use the abstract domain is as follows: for each program point, we have a map from visible variables to elements of the abstract domain, describing what we know about the values of the variables at this point. For instance, consider the factorial example again. After running the first basic block, we know that fac and i both contain 1, so we have a map that associates both fac and i to "not 0". An abstract domain is a set $D$ (or, if you prefer, a data type) with the following properties:

• There is a function merge that takes two elements of $D$ and returns an element of $D$. This function is associative (merge(x, merge(y,z)) = merge(merge(x,y), z)), commutative (merge(x,y) = merge(y,x)) and idempotent (merge(x,x) = x).
• There is an element bottom of $D$ such that merge(x, bottom) = x.

Algebraically speaking, $D$ needs to be a semi-lattice. For our example, we use the following domain:

• D contains the elements "bottom" (nothing is known), "equals 0", "not 0" and "could be 0".
• merge is defined as follows:
  merge(bottom, x) = x
  merge("could be 0", x) = "could be 0"
  merge(x, x) = x
  merge("equals 0", "not 0") = "could be 0"
• bottom is bottom, obviously.

It is easy but tedious to check that all conditions hold. The second ingredient we need are the abstract state transformers. An abstract state transformer describes how a specific expression or statement processes abstract values. For the example, we need to define abstract state transformers for multiplication and addition. Let us start with multiplication, so let us look at the expression x*y. We know that if x or y is 0, x*y is zero. Thus, if x or y is "equals 0", the result of x*y is also "equals 0". If x or y is "could be 0" (but neither is "equals 0"), we simply don't know what the result is - it could be zero or not. Thus, in this case, the result is "could be 0". What if x and y are both "not 0"? In a mathematically ideal world, we would have x*y be non-zero as well.
But in a C program, multiplication could overflow, yielding 0! So, to be correct, we have to yield "could be 0" in this case. Finally, when x is bottom, we can just return whatever value we had assigned to y, and vice versa. For addition, the situation looks like this: consider x+y. If neither x nor y is "not 0", but at least one is "could be 0", the result is "could be 0". If both are "equals 0", the result is "equals 0". What if both are "not 0"? It seems that, since the variables are unsigned and not zero, it should be "not 0". Sadly, overflow strikes again, and we have to make do with "could be 0". The bottom cases can be handled just like for multiplication. The way of defining the transformation functions above showcases another important property: they must reflect the actual behavior of the underlying program instructions. There is a formal description of this property, using Galois connections; for the details, it is best to look at the literature. The third ingredient is straightforward: we use a simple map from program locations and variable names to values in the abstract domain. In more complex analyses, more involved forms of maps may be used (e.g., to handle arbitrary procedure calls, or to account for the heap). At this point, we have almost all the ingredients we need to set up an abstract interpretation. To actually analyze a function, we take its CFG and perform a fixpoint algorithm. Concretely, let us consider the CFG for factorial again. This time, we have named the basic blocks, and simplified the variable names.

[Figure: Control flow graph for factorial - named basic blocks]

We provide an initial variable map: n is "could be 0" before BB1. As a first step, we analyze BB1 and find that the final variable map should be:

• n "could be 0".
• fac "not 0" (it is, in fact, 1, but our domain does not allow us to express this).
• i "not 0".

Let us call this state N.
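The zero-ness domain and the multiplication transformer described above can be sketched in C. This is an illustration of the definitions only, not CPROVER's implementation:

```c
/* The zero-ness domain from the example: BOTTOM = nothing known,
 * EQ0 = "equals 0", NEQ0 = "not 0", MAYBE0 = "could be 0". */
typedef enum { BOTTOM, EQ0, NEQ0, MAYBE0 } absval;

/* merge is associative, commutative and idempotent, with BOTTOM
 * as the neutral element; the incompatible pair EQ0/NEQ0 (and
 * anything involving MAYBE0) merges to MAYBE0. */
absval merge(absval a, absval b)
{
  if (a == BOTTOM) return b;
  if (b == BOTTOM) return a;
  if (a == b) return a;
  return MAYBE0;
}

/* Abstract transformer for x*y: zero times anything is zero; but
 * NEQ0 * NEQ0 is only MAYBE0, because C multiplication can
 * overflow to 0. */
absval abs_mul(absval x, absval y)
{
  if (x == BOTTOM) return y;
  if (y == BOTTOM) return x;
  if (x == EQ0 || y == EQ0) return EQ0;
  return MAYBE0; /* covers "could be 0" and the overflow case */
}
```

The transformer for addition follows the same pattern, again yielding MAYBE0 for NEQ0 + NEQ0 because of possible overflow.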
At this point, we look at all the outgoing edges of BB1 - we wish to propagate our new results to all blocks that follow BB1. There is only one such block, BB2. We analyze BB2 and find that it doesn't change any variables. Furthermore, the result of i <= n does not allow us to infer anything about the values in the variable map, so we get the same variable map at the end of BB2 as before.

Again, we look at the outgoing edges. There are two successor blocks, BB3 and BB4. The information in our variable map does not allow us to rule out either of the branches, so we need to propagate to both blocks. We start with BB3 and remember that we still need to visit BB4.

At this point, we know that n "could be 0", while fac and i are "not 0". Applying the abstract transformers, we learn that afterwards, both fac and i "could be 0" (fac ends up as "could be 0" since both fac and i were "not 0" initially; i ends up as "could be 0" because both i and 1 are "not 0"). So, the final variable map at the end of BB3 is

• n, fac and i "could be 0".

Let us call this state S.

At this point, we propagate again, this time to BB2. But wait, we have propagated to BB2 before! This situation is handled as follows: we first calculate the result of running BB2 starting from S; this yields S again. Now, we know that at the end of BB2, we can be either in state S or in state N. To get a single state out of these two, we merge: for each variable, merge the mapping of that variable in S and N. We get:

• n maps to merge("could be 0", "could be 0") = "could be 0"
• fac maps to merge("could be 0", "not 0") = "could be 0"
• i maps to merge("could be 0", "not 0") = "could be 0"

In other words, we arrive at S again. At this point, we propagate to BB3 and BB4 again. Running BB3, we again end up with state S at the end of BB3, so nothing changes. We detect this situation and conclude that we do not need to propagate from BB3 to BB2 - it would not change anything! Thus, we can now propagate to BB4.
The state at the end of BB4 is also S. Now, since we know the variable map at the end of BB4, we can look up the properties of the return value: in S, fac maps to "could be 0", so we could not prove that the function never returns 0. In fact, this is correct: calculating factorial(200) will yield 0, since the 64-bit integer fac overflows.

Nevertheless, let us consider what would happen in a mathematically ideal world (e.g., if we used big integers). In that case, we would have "not 0" * "not 0" = "not 0", "not 0" + x = "not 0" and x + "not 0" = "not 0". Running the abstract interpretation with these semantics, we find that if we start BB3 with variable map N, we get variable map N at the end as well, so we end up with variable map N at the end of BB4 - but this means that fac maps to "not 0"!

The algorithm sketched above is called a fixpoint algorithm because we keep propagating until we find that applying the transformers does not yield any new results (which, in a mathematically precise way, can be shown to be equivalent to calculating, for a specific function f, an x such that f(x) = x).

This overview only describes the most basic way of performing abstract interpretation. For one, it only works on the procedure level (we say it is an intra-procedural analysis); there are various ways of extending it to work across function boundaries, yielding an inter-procedural analysis. Additionally, there are situations where it can be helpful to make a variable map more abstract (widening) or to use information gained from various sources to make it more precise (narrowing); these advanced topics can be found in the literature as well.

Glossary

Instrument

To instrument a piece of code means to modify it by (usually) inserting new fragments of code that, when executed, tell us something useful about the code that has been instrumented.
For instance, imagine you are given the following function:

```c
int aha (int a)
{
  if (a > 10)
    return 1;
  else
    return -1;
}
```

and you want to design an analysis that figures out which lines of code will be covered when aha is executed with the input 5. We can instrument the code so that the function looks like this:

```c
int aha (int a)
{
  __runtime_line_seen(0);
  if (a > 10)
  {
    __runtime_line_seen(1);
    return 1;
  }
  else
  {
    __runtime_line_seen(2);
    return -1;
  }
  __runtime_line_seen(3);
}
```

All we have to do now is to implement the function `void __runtime_line_seen(int line)` so that, when executed, it logs somewhere which line is being visited. Finally, we execute the instrumented version of aha and collect the desired information from the log.

More generally speaking, and especially within the CPROVER code base, instrumenting the code often refers to modifying its behavior in a manner that makes it easier for a given analysis to do its job, regardless of whether that is achieved by executing the instrumented code or by just analyzing it in some other way.

Flattening and Lowering

As we have seen above, we often operate on many different representations of programs, such as ASTs, control flow graphs, SSA programs, logical formulas in BMC and so on. Each of these forms is good for certain kinds of analyses, transformations or optimizations. One important kind of step in dealing with program representations is going from one representation to another. Often, such steps go from a more ''high-level'' representation (closer to the source code) to a more ''low-level'' representation. Such transformation steps are known as flattening or lowering steps, and tend to be more-or-less irreversible. An example of a lowering step is the transformation from ASTs to the GOTO IR, given above.

Verification Condition

In the CPROVER framework, the term verification condition is used in a somewhat non-standard way. Let a program and a set of assertions be given.
We transform the program into an (acyclic) SSA (i.e., an SSA with all loops unrolled a finite number of times) and turn it into a logical formula, as described above. Note that in this case, the formula will also contain information about what the program does after the assertion is reached: this part of the formula is, in fact, irrelevant for deciding whether the program can satisfy the assertion or not. The verification condition is the part of the formula that only covers the program execution until the line that checks the assertion has been executed, with everything that comes after it removed.
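The fixpoint computation on the factorial CFG described earlier can also be sketched end to end. The following Python worklist sketch uses assumed block and variable names matching the walkthrough (it is an illustration, not CPROVER code); it reproduces the conclusion that fac is "could be 0" at the return:

```python
# Worklist fixpoint over the factorial CFG with the zero-analysis domain.
BOTTOM, EQ0, NOT0, MAYBE0 = "bottom", "equals 0", "not 0", "could be 0"

def merge(x, y):
    if x == BOTTOM:
        return y
    if y == BOTTOM:
        return x
    return x if x == y else MAYBE0

def merge_maps(a, b):
    return {v: merge(a[v], b[v]) for v in a}

def abs_mul(x, y):
    if BOTTOM in (x, y):
        return y if x == BOTTOM else x
    return EQ0 if EQ0 in (x, y) else MAYBE0  # overflow: NOT0 * NOT0 -> MAYBE0

def abs_add(x, y):
    if BOTTOM in (x, y):
        return y if x == BOTTOM else x
    if x == EQ0:
        return y
    if y == EQ0:
        return x
    return MAYBE0                            # wrap-around: NOT0 + NOT0 -> MAYBE0

def t_bb1(s):  # fac = 1; i = 1
    return {**s, "fac": NOT0, "i": NOT0}

def t_bb2(s):  # i <= n: the test adds no information in this domain
    return dict(s)

def t_bb3(s):  # fac = fac * i; i = i + 1 (the constant 1 is "not 0")
    return {**s, "fac": abs_mul(s["fac"], s["i"]),
                 "i": abs_add(s["i"], NOT0)}

succ = {"BB1": ["BB2"], "BB2": ["BB3", "BB4"], "BB3": ["BB2"], "BB4": []}
transfer = {"BB1": t_bb1, "BB2": t_bb2, "BB3": t_bb3, "BB4": dict}

bottom_map = {"n": BOTTOM, "fac": BOTTOM, "i": BOTTOM}
in_state = {b: dict(bottom_map) for b in succ}
in_state["BB1"] = {"n": MAYBE0, "fac": BOTTOM, "i": BOTTOM}

# Propagate until no in-state changes; termination follows from the finite
# semi-lattice and the monotone merge.
work = ["BB1"]
while work:
    b = work.pop(0)
    out = transfer[b](in_state[b])
    for s in succ[b]:
        new_in = merge_maps(in_state[s], out)
        if new_in != in_state[s]:
            in_state[s] = new_in
            work.append(s)
```

At the fixpoint, the state before BB4 is the state S from the walkthrough, so the return value fac is "could be 0".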
https://scicomp.stackexchange.com/questions/7674/a-simple-question-about-1d-finite-element-derivatives
# A simple question about 1D finite element derivatives

For the 1D derivative we have $$F(x) = \frac{\partial f(x)}{\partial x}$$ $$f(x)=\sum_{i}f_ie_i(x)$$ $$F(x)=\sum_{i}F_ie_i(x)$$ where $e_i(x)$ are the FEM basis functions. We can then apply the Galerkin procedure to the derivative and get the matrix form $$A\widehat{F}=B\widehat{f}$$ $$\widehat{F}=A^{-1}B\widehat{f}$$ where $\widehat{F}$ and $\widehat{f}$ are the coefficient vectors of $F_i$ and $f_i$. For the derivative of the product of two functions $f(x)$ and $g(x)$, applying the product rule we get $$\frac{\partial (f(x)g(x))}{\partial x}=f(x)\frac{\partial g(x)}{\partial x}+g(x)\frac{\partial f(x)}{\partial x}$$ Now in FEM I want the derivative matrix $C=A^{-1}B$ also to fulfill the similar constraint $$C \widehat{fg} = [f]_{\mathrm{diag}}\,C\widehat{g} + [g]_{\mathrm{diag}}\,C\widehat{f}$$ where $[f]_{\mathrm{diag}}$ means the diagonal matrix with the values $f_i$ on its diagonal. Does anybody know what kind of basis functions $e_i(x)$ I can choose?
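To make the matrices concrete (this sketch is my own illustration and does not answer the product-rule question): for piecewise-linear hat functions on a uniform mesh of $[0,1]$, $A_{ij}=\int e_i e_j\,dx$ is the tridiagonal mass matrix and $B_{ij}=\int e_i e_j'\,dx$ is the skew-like derivative matrix. Applied to the nodal values of $f(x)=x$, the computed $\widehat{F}=A^{-1}B\widehat{f}$ comes out as the constant 1 at every node:

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

n = 11                               # nodes on [0, 1], linear hat functions
h = 1.0 / (n - 1)
A = [[0.0] * n for _ in range(n)]    # mass matrix  A_ij = ∫ e_i e_j dx
B = [[0.0] * n for _ in range(n)]    # B_ij = ∫ e_i e_j' dx
for i in range(n):
    A[i][i] = 2 * h / 3 if 0 < i < n - 1 else h / 3
    if i > 0:
        A[i][i - 1] = h / 6
        B[i][i - 1] = -0.5
    if i < n - 1:
        A[i][i + 1] = h / 6
        B[i][i + 1] = 0.5
B[0][0], B[n - 1][n - 1] = -0.5, 0.5  # boundary rows of B

f_hat = [i * h for i in range(n)]     # nodal values of f(x) = x
rhs = [sum(B[i][j] * f_hat[j] for j in range(n)) for i in range(n)]
F_hat = solve(A, rhs)                 # F_hat = A^{-1} B f_hat, ≈ all ones
```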
http://sage.math.gordon.edu/home/pub/41/
# MAT 338 Day 36 2011

by kcrisman

Recall that last time we were finishing up discussing long-term average values of certain arithmetic functions. We had that

• The average value of $\tau(n)$ was $\ln(n)+2\gamma-1$.
• The average value of $\sigma(n)$ was $\left(\frac{1}{2}\sum_{d=1}^\infty \frac{1}{d^2}\right)\; n$.

Because of Euler's amazing solution to the Basel problem, we know that $$\sum_{d=1}^\infty \frac{1}{d^2}=\frac{\pi^2}{6}$$ so the constant in question is $\frac{\pi^2}{12}$. We will discuss this computation again soon, when we return to the connection between number theory and such abstract series.

We ended with the question of yet another average value - that of the $\phi$ function. You can try out various ideas below. However, we aren't ready to prove anything about that quite yet.

```python
def L(n):
    ls = []
    out = 0
    for i in range(1, n+1):
        out += euler_phi(i)
        ls.append((i, out/i))
    return ls

P = line(L(100))
```
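In the same spirit, here is a plain-Python sanity check (outside Sage, with a deliberately naive $\varphi$) of the known asymptotic $\sum_{k\le N}\varphi(k)\sim \frac{3}{\pi^2}N^2$, i.e. the average value of $\varphi(n)$ behaving like $\frac{3}{\pi^2}n$ — the fact we have not yet proved:

```python
from math import gcd, pi

def euler_phi(n):
    """Naive Euler phi via gcd counting; fine for small n, no Sage needed."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

N = 300
total = sum(euler_phi(k) for k in range(1, N + 1))
ratio = total / N**2   # should be close to 3/pi^2 ≈ 0.30396
```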
https://matlabhelper.com/blog/matlab/nonlinear-constrained-optimization-using-matlabs-fmincon/
# Nonlinear constrained optimization using MATLAB’s fmincon

May 20, 2022

Since most practical engineering design problems are nonlinear, applying nonlinear programming techniques is paramount. This blog applies both graphical and numerical methods to obtain the optimal solution. The focus here will be on optimization using the advanced sequential quadratic programming (SQP) algorithm of MATLAB's fmincon solver. The theory behind the Karush-Kuhn-Tucker (KKT) conditions for optimality in the cases of equality and inequality constraints is discussed.

## Description and foundation of nonlinear optimization

This blog deals with an optimization problem with multiple design variables. There are different methods of solving multivariate problems, as discussed below.

Unconstrained optimization problems:

1. Zeroth order: Simplex search method, pattern search method
2. First order: Steepest descent, Conjugate gradient
3. Second order: Newton's method, Quasi-Newton methods, Line-search methods

Constrained optimization problems:

1. Elimination method
2. Penalty methods
3. Karush-Kuhn-Tucker (KKT) conditions
4. Sequential linear programming

This blog deals with solving by the Lagrange multiplier method with KKT conditions using the sequential quadratic programming (SQP) approach.

## 1st and 2nd order optimality conditions

Constrained optimization problems can be reformulated as unconstrained optimization problems. The local optima of an objective function can be characterized through its derivatives, and the optimality conditions involve the gradient vector and the Hessian matrix of the objective function. Given that a function f(x) is continuous and has continuous first and second derivatives, it can be expressed as a Taylor series expansion about a point $x^*$ and, neglecting higher-order terms,

$$f(x) \approx f(x^*) + \nabla f(x^*)^T (x - x^*) + \tfrac{1}{2} (x - x^*)^T H(x^*)\, (x - x^*)$$

with $x^*$ being the local optimum.
Therefore, if $x^*$ is a local minimum, the first-order necessary condition that needs to be satisfied is

$$\nabla f(x^*) = 0$$

which reduces the Taylor expansion to

$$f(x) \approx f(x^*) + \tfrac{1}{2} (x - x^*)^T H(x^*)\, (x - x^*) \ge f(x^*)$$

which means that the Hessian $H(x^*)$ is positive semidefinite (2nd order necessary condition). However, an additional 2nd order sufficient condition needs to be satisfied to guarantee a local minimum at $x^*$: the Hessian of f(x) must be positive definite at the point where $\nabla f(x^*) = 0$.

Maximum and minimum; local and global maxima and minima

## Formulation of the optimization problem

### Formulation of the NLP problem

A nonlinear programming problem can have a linear or nonlinear objective function with linear and/or nonlinear constraints. The NLP solver in MATLAB uses the formulation shown below:

$$\min_X f(X) \quad \text{s.t.} \quad C(X) \le 0,\; C_{eq}(X) = 0,\; AX \le b,\; A_{eq}X = b_{eq},\; lb \le X \le ub$$

where C(X) is a vector-valued function with all the nonlinear inequality constraints and $C_{eq}(X)$ is a vector-valued function with all the nonlinear equality constraints.

NLP solvers are iterative, which implies that they start from an initial guess of what the optimum might be. The solver iteratively moves from the starting point, following the gradient of the objective function and the constraints, to reach a point where the optimality conditions are satisfied.

Another issue commonly encountered in nonlinear optimization is non-convexity of the objective function. This implies that the solver may not be able to find the global minimum and instead finds one of many local minima. Therefore, the choice of starting point will significantly impact the solution found, and the problem becomes computationally expensive, especially when it involves more than two variables. Additionally, the constraints can also be non-convex.

Convex and concave functions

### Newton's method

In this method, the objective function is approximated by a quadratic function at each iteration; this approximation is then minimized, and a descent direction is computed.
The quadratic approximation of the function at the current iterate $x_k$ is given by

$$f(x) \approx f(x_k) + \nabla f(x_k)^T (x - x_k) + \tfrac{1}{2}(x - x_k)^T H(x_k)\, (x - x_k)$$

By the 1st order necessary condition, $\nabla f(x_k) + H(x_k)(x - x_k) = 0$, which gives the descent direction

$$d_k = -H(x_k)^{-1} \nabla f(x_k)$$

where $H(x_k)^{-1}$ is the inverse of the Hessian, and the next iterate is found as $x_{k+1} = x_k + d_k$.

Feasible region with constraint, with gradient of f and gradient of g

## Nonlinear problem with equality constraints - Karush-Kuhn-Tucker (KKT) conditions

Suppose we have a function f(x) to be minimized subject to an equality constraint h(x) = 0. This problem can be converted from a constrained problem to an unconstrained problem using the Lagrange function

$$L(x, \lambda) = f(x) + \lambda^T h(x)$$

Setting $\nabla_x L = \nabla f(x) + \lambda^T \nabla h(x) = 0$ is the KKT condition for stationarity. The constants $\lambda$ are the Lagrange multipliers. Suppose that the minimum for the unconstrained problem is x*, and x* satisfies the constraint equation. For all values of x which satisfy the constraint equation, the minimum of $L(x, \lambda)$ is the minimum of f(x) subject to the constraint equation.

KKT conditions for example constraints

At the optimum, the gradient of the objective function is antiparallel to the gradient of the constraint function.

Then the equations $\nabla_x L = 0$ and $h(x) = 0$ are solved; $h(x) = 0$ is the KKT condition for primal feasibility.
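As a small numerical illustration (a hypothetical example, not the one used later in this post): for $f(x) = x_1^2 + x_2^2$ with the constraint $h(x) = x_1 + x_2 - 1 = 0$, the stationarity and feasibility equations are solved by $x^* = (1/2, 1/2)$ with $\lambda = -1$. A short Python check:

```python
# Hypothetical example: minimize f = x1^2 + x2^2 s.t. h = x1 + x2 - 1 = 0.
# Lagrangian: L(x, lam) = f(x) + lam * h(x).
def grad_f(x):
    return (2 * x[0], 2 * x[1])

def h(x):
    return x[0] + x[1] - 1

grad_h = (1.0, 1.0)            # gradient of the constraint is constant here

x_star, lam = (0.5, 0.5), -1.0

# KKT stationarity: grad f(x*) + lam * grad h(x*) = 0 componentwise
stationarity = tuple(df + lam * dh for df, dh in zip(grad_f(x_star), grad_h))
```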
When there are no inequality constraints, the KKT conditions reduce to the Lagrange conditions, and the multipliers involved are the Lagrange multipliers.

## Nonlinear problem with inequality constraints - KKT conditions

Suppose we have a function f(x) to be minimized subject to inequality constraints $g_i(x) \le 0$, $i = 1..m$.

KKT dual feasibility condition: $\mu_i \ge 0$ for all $i$.

KKT complementary slackness condition: $\mu_i\, g_i(x) = 0$ for all $i$.

For the case of inequality constraints, it is important to note that there are sign restrictions on the Lagrange multipliers, as described above. The Lagrangian can also be written for multiple constraints consisting of equality and inequality constraints as

$$L(x, \lambda, \mu) = f(x) + \lambda^T h(x) + \mu^T g(x)$$

Example of an active constraint; example of an inactive constraint

## Equality-constrained Sequential Quadratic Programming problem

Sequential quadratic programming is one of the algorithms used to solve nonlinear constrained optimization problems by converting the problem into a sequence of quadratic program sub-problems. To get a linear system of equations when applying the KKT conditions, it is necessary to have a quadratic objective function and linear constraint functions; this is what you would call a quadratic programming problem. The approach involves approximating the problem with a quadratic objective and linear constraints and solving iteratively, updating the Hessian, gradient, and Jacobian at each step. For a problem with equality constraints, the KKT conditions require that $\nabla_x L = 0$ and $h(x) = 0$ hold at the optimal solution.
The steps $\Delta x$ and $\Delta\lambda$ are obtained with Newton's method for solving the resulting system as

$$\begin{bmatrix} \nabla_{xx}^2 L & \nabla h \\ \nabla h^T & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta\lambda \end{bmatrix} = -\begin{bmatrix} \nabla_x L \\ h \end{bmatrix} \quad (1)$$

Using the solution for $\Delta x$ and $\Delta\lambda$, the next iterates of $x$ and $\lambda$ are obtained as $x_{k+1} = x_k + \Delta x$ and $\lambda_{k+1} = \lambda_k + \Delta\lambda$. The 1st order optimality condition that needs to be satisfied is represented by equation (1) above for the problem of minimizing f(x) subject to h(x) = 0. At each iteration, a quadratic program sub-problem needs to be solved to find the steps necessary to update the iterates.

In MATLAB, the objective function is coded into a separate file, where it takes as input the vector X containing the design variables and outputs the value of the objective function. Vectors are created defining the lower and upper bounds, and the constraints are specified as empty if there are none. There must be a separate file to specify the nonlinear constraints as a function of X containing the input variables.

## Inequality-constrained Sequential Quadratic Programming problem with the example

For an example problem with inequality constraints, it is necessary to introduce slack variables to convert the inequality constraints to equality constraints; for instance, a constraint $g_i(x) \le 0$ becomes $g_i(x) + s_i^2 = 0$.

The KKT conditions include the complementary slackness condition, which in the general form is given by $\mu_i\, g_i(x) = 0$ for all i = 1..m.

The Lagrangian for this problem is written with the slack variables, and the partial derivatives of the Lagrangian with respect to each of these variables are taken and set to zero, yielding six equations. The last two equations are the complementary slackness conditions. These indicate that either the multiplier $\mu_i$ is zero or the slack $s_i$ is zero. If the slack is zero, then the constraint is active; if the multiplier is zero, then the constraint is inactive. This implies that each constraint is either active or inactive, which gives four possibilities, and it is necessary to test each of them.

Case 1: both slacks zero. Equation 2 is violated.
Case 2: gives negative Lagrange multipliers.

Case 3: gives negative Lagrange multipliers.

Case 4: gives a valid solution, indicating that constraint-1 is active and constraint-2 is inactive.

## MATLAB Implementation of nonlinear constrained optimization with equality and inequality constraints

Developed in MATLAB R2022a with Optimization Toolbox.

### i. Code for equality-constrained SQP nonlinear optimization problem

```matlab
% main_optimization_equality.m
% This script contains the graphical as well as the numerical solution for the
% non-linear optimization problem with linear/non-linear equality constraints
% Topic: Non-linear constrained optimization with MATLAB's fmincon function
% Website: https://matlabhelper.com
% Date: 09-04-2022

clc
clear all
close all

global fcount;
lb = [];
ub = [];

% Creates a mesh grid with variables x1 and x2
[x1, x2] = meshgrid(-5:.01:25, -5:.01:25);

% Evaluates the objective function on the grid
z = (x1.^2) - 8*x1 + x2.^2 - 12.*x2;

% (Rest of the code sets up the constraints and bounds graphically to determine
% the feasible region and finds the optimal solution with fmincon. Two m-files
% for the objective function and the constraint function are created.)

opts = optimoptions(@fmincon, 'Display', 'iter-detailed', 'Algorithm', 'sqp');
[x, fval] = fmincon(@objfun_equality, x0, A, b, Aeq, beq, lb, ub, @confun_equality, opts);
```

### ii. Code for inequality-constrained SQP nonlinear optimization problem

```matlab
% main_optimization_inequality.m
% This script contains the graphical as well as the numerical solution for the
% nonlinear optimization problem with linear and/or nonlinear inequality constraints
% Topic: Non-linear constrained optimization with MATLAB's fmincon function
% Website: https://matlabhelper.com
% Date: 09-04-2022

clc
clear all
close all

global fcount;
lb = [];
ub = [];

% Creates a mesh grid with variables x1 and x2
[x1, x2] = meshgrid(-5:.01:5, -5:.01:5);

% Evaluates the objective function on the grid
z = (x1.^2) - x2;

% (Rest of the code sets up the constraints and bounds graphically to determine
% the feasible region and finds the optimal solution with fmincon. Two m-files
% for the objective function and the constraint function are created.)

opts = optimoptions(@fmincon, 'Display', 'iter-detailed', 'Algorithm', 'sqp');
[x, fval] = fmincon(@objfun_inequality, x0, A, b, Aeq, beq, lb, ub, @confun_inequality, opts);
```

## Output

### i. Nonlinear equality-constrained problem with the SQP algorithm in fmincon

The variables $x_1$, $x_2$ and f(X) are plotted as x, y and z respectively. The plot of the objective function versus $x_1$ and $x_2$ without the constraints is as follows:

Unconstrained objective function (equality-constrained problem)

Upon including the constraints, the feasible region of the solution is obtained, as shown here in the contour plot:

Contour plot of the objective function with the equality constraint

It is observed that the graphical solution for the minimum of the objective function is the same as that obtained using the fmincon solver with the given initial guess. The nonlinear problem's solution lies on the feasible region defined by the equality constraint.

### ii. Nonlinear inequality-constrained problem with the SQP algorithm in fmincon

The plot of the objective function versus $x_1$ and $x_2$ without the constraints is as follows:

Unconstrained objective function (inequality-constrained problem)

Upon including the constraints, the feasible region of the solution is obtained, as shown here:

Feasible region (inequality-constrained problem)

The contour plot of the objective function with the constraints is as shown:

Contour plot of the objective with inequality constraints

It is observed that the graphical solution for the minimum of the objective function is the same as that obtained using the fmincon solver with the given initial guess. The nonlinear problem's solution lies within the feasible region.

## Conclusion

We have seen how to solve the nonlinear optimization problem by taking the cases of equality constraints and inequality constraints separately. The theory of the Karush-Kuhn-Tucker conditions has been presented to give insight into how constrained nonlinear optimization problems are solved. The basics of the SQP algorithm used by MATLAB's fmincon solver to solve these problems have also been discussed.
https://www.neetprep.com/question/52060-Figure-shows-variation-resistance-reactance-versus-angularfrequency-Identify-curve-corresponds-inductive-reactance-andresistanceb-Show-series-LCR-circuit-resonance-behaves-purely-resistivecircuitCompare-phase-relation-current-voltage-series-LCR-circuit-fori-XL--XC-ii-XL--XC-using-phasor-diagramsc-acceptor-circuit-used/126-Physics--Alternating-Current/697-Alternating-Current
NEET Physics Alternating Current Questions Solved

(a) Figure shows the variation of resistance and reactance versus angular frequency. Identify the curve which corresponds to inductive reactance and resistance.

(b) Show that a series LCR circuit at resonance behaves as a purely resistive circuit. Compare the phase relation between current and voltage in a series LCR circuit for (i) XL > XC (ii) XL = XC using phasor diagrams.

(c) What is an acceptor circuit and where is it used? (1+3+1)
https://beacabdan.wordpress.com/2017/01/02/game-theory/
# GAME THEORY

This week I read about Game Theory in Morton D. Davis' book Game Theory: A Nontechnical Introduction. I stumbled upon this book in the City Library while I was looking for a novel to read during the holidays. What I found really interesting about Game Theory is the relationship between comprehensible strategy games and complex real-life dynamics: it has many applications in several fields including Economics, Political Science, Biology… not to mention Artificial Intelligence. There are many cool examples in the book and no prior knowledge is really necessary to be able to (at least) grasp the basic concepts. In fact, many decisions in everyday life are taken using intuitive theoretical game models (also known as common sense).

The field of Game Theory was born in 1928 when a twenty-four-year-old John von Neumann, a Hungarian-American scientist, proved the minimax theorem in his paper Zur Theorie der Gesellschaftsspiele (On the Theory of Parlor Games). The idea behind minimax is that each player tries to maximise his minimum gain and minimise his maximum loss. This is used in two-person zero-sum games to make decisions. In a minimax setting:

1. Each player tries to maximise his score, $V$.
   • The strategy of each player guarantees him a gain of at least $V$.
2. Each player tries to minimise his loss.
   • The strategy of each player guarantees him a loss of at most $V$.
3. The sign of the value of the game, $V$, represents the outcome.

Since it is a zero-sum game, the above is the same as saying that each player tries to minimise the maximum gain of the other player. And this is exactly what Game Theory is about: analysing the goals and consequences of actions and making decisions accordingly. Game Theory provides some tools to reach the best possible solution even under some unpredictable (random) circumstances. In Game Theory:

• A game is a situation that can be modelled as a Game Theory problem.
• A player is an agent or set of agents that have the same goal in the game: a chess player, a football team, a company, or a country, to name a few. • A strategy is a decision or set of decisions made by a player: his action plan. • Strategies are not necessarily good strategies. • Random strategies might be really useful. • Decisions translate into actions. • A reward or penalty is the payoff of the game for a player. As an example, a strategy for tic tac toe (the only game I mastered so far) would be: “Place a cross in cell A; if the adversary places a circle in cell B, then place a cross in cell D; if he places a circle on C, instead, place a cross in cell E. Then, if he does F, answer with G…”. Of course, real strategy games are very different from one another; they can be too complicated to explicitly write a strategy, rewards might be conditioned by unknown variables, actions can be non-deterministic, players might choose to cooperate or compete or even try to surprise the adversary, etc. All games, though, have one thing in common: while the players act on their environment to achieve their goals, other players are also changing the very same environment.
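For a game small enough to write down as a payoff matrix, the maximin/minimax computation is only a few lines of Python. The payoff values here are made up for illustration; this particular matrix happens to have a saddle point, so the two quantities coincide and give the value $V$ of the game:

```python
# Payoff matrix for a hypothetical two-person zero-sum game: entry [i][j]
# is what player 2 pays player 1 when they pick strategies i and j.
payoff = [
    [4, 2],
    [1, 0],
]

# Player 1 maximises his minimum gain over his own strategies (rows).
maximin = max(min(row) for row in payoff)

# Player 2 minimises the maximum he can lose over his strategies (columns).
minimax = min(max(row[j] for row in payoff) for j in range(len(payoff[0])))

# When maximin == minimax the game has a saddle point in pure strategies,
# and the common value is the value V of the game (here, V = 2).
```

Von Neumann's theorem says that once mixed (randomised) strategies are allowed, these two quantities always coincide, even for matrices with no pure-strategy saddle point.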
https://zbmath.org/?q=ci%3A5234058
## Found 8 Documents (Results 1–8)

### Irregular behaviour of class numbers and Euler-Kronecker constants of cyclotomic fields: the log log log devil at play. (English) Zbl 1451.11117
Pintz, János (ed.) et al., Irregularities in the distribution of prime numbers. From the era of Helmut Maier’s matrix method and beyond. Cham: Springer. 143-163 (2018).
MSC: 11R18 11R29 11R42

### Values of the Euler $$\varphi$$-function not divisible by a given odd prime, and the distribution of Euler-Kronecker constants for cyclotomic fields. (English) Zbl 1294.11164
MSC: 11N37 11Y60

### Logarithmic derivatives of Artin $$L$$-functions. (English) Zbl 1341.11064
MSC: 11R42 11M41

### Sum of Euler-Kronecker constants over consecutive cyclotomic fields. (English) Zbl 1282.11143
MSC: 11R18 11R42

### Asymptotic behaviour of the Euler-Kronecker constant. (English) Zbl 1185.11070
Ginzburg, Victor (ed.), Algebraic geometry and number theory. In Honor of Vladimir Drinfeld’s 50th birthday. Basel: Birkhäuser (ISBN 978-0-8176-4471-0/hbk). Progress in Mathematics 253, 453-458 (2006).
MSC: 11R42 11R47
https://math.stackexchange.com/questions/451312/how-many-special-right-triangles-are-there
# How many special right triangles are there? We all learned in school about "special" right triangles. Special right triangles have integer side lengths. Examples include the $3$-$4$-$5$ right triangle, the $5$-$12$-$13$ right triangle, the $8$-$15$-$17$ right triangle, and their scalar multiples ($6$-$8$-$10$, $10$-$24$-$26$, $16$-$30$-$34$, etc). How many are there? Is there a limit to the number of lowest-form (no scalar multiple) special right triangles? Are there any patterns that arise from the progression of integer side lengths? Choose your favorite positive integers $m$ and $n$ with $m>n$. Then set $a=m^2-n^2$, $b=2mn$ and $c=m^2+n^2$. You will see then that we have the equation $a^2+b^2=c^2$. Now, it just so happens that if you choose $m$ and $n$ so that they share no common factors, and such that $m-n$ is odd, then $a,b$, and $c$ also share no common factors. Can you see why this is true? (In fact, $a$ and $b$ won't even share any common factors themselves). It turns out that every possible lowest-form triangle is derived from some appropriate choice of $m$ and $n$. This answers your first question: there are infinitely many 'lowest form' triangles (called primitive triplets). It also partially answers your second question: the integer side length of one of the legs of a primitive special triangle is always the difference of two squares. • On the other hand, there are no such solutions for exponents >=3. Apparently, some fellow by the name Andrew Wiles has discovered a truly marvelous proof of this theorem which this comment is too small to contain... – Euro Micelli Jul 25 '13 at 2:17 Such triangles can be generally constructed as follows: $$\{n^2-1,2n,n^2+1\},n\in\mathbb{N}\land{n}\ge2$$ In other words, there is an infinite number of such triples. This goes back to Diophantus. Let $a,b,c$ be positive integers such that $a^2+b^2=c^2$. Then $(a,b,c)$ is a Pythagorean triple. 
If $p$ is a prime common divisor of $a$ and $b$, then $p$ also divides $c$: if $a=pA$ and $b=pB$, then $c^2=p^2(A^2+B^2)$, so that $p$ divides $c^2$, hence $c$. Similarly, if $a$ and $c$ have a common prime divisor, this prime also divides $b$. Thus we can assume that $a$ and $b$ are coprime, by factoring out all common prime divisors. Such a triple is primitive. Next we can show that $a$ and $b$ are of different parity: one is odd and the other is even. They can't both be even, because they are coprime. If they were both odd, we could write $a=2A+1$ and $b=2B+1$, so $$a^2+b^2=4(A^2+A+B^2+B)+2=c^2.$$ This is impossible, because $c$ would have to be even, so $c=2C$ and we'd get $$2=4(C^2-A^2-A-B^2-B)$$ which is clearly impossible, since the right-hand side is divisible by $4$. Assume, without loss of generality, that $a$ is odd and $b$ is even. Then we can write $b=2B$ and $$B^2=\frac{c+a}{2}\frac{c-a}{2}.$$ Let $p$ be a prime dividing both $(c+a)/2$ and $(c-a)/2$. Then $p$ divides the sum, which is $c$, and the difference, which is $a$: absurd. Therefore $(c+a)/2$ and $(c-a)/2$ are coprime. Since their product is a square, both must be squares. Thus $$\frac{c+a}{2}=u^2,\quad\frac{c-a}{2}=v^2$$ from which we derive $$a=u^2-v^2,\quad b=2uv,\quad c=u^2+v^2.$$ Moreover $u$ and $v$ are coprime; they can't both be odd, because otherwise $a$ would be even. Conversely, any pair $(u,v)$ of coprime positive integers, one odd and the other even, with $u>v$, gives rise to a primitive Pythagorean triple. For $u=2$, $v=1$ we get the triple $(3,4,5)$; for $u=3$, $v=2$ we get the next one $(5,12,13)$ and so on.
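The parametrisation in the answers above can be checked numerically. A small sketch (the `limit` bound on $u$ and the helper name `primitive_triples` are just for illustration):

```python
from math import gcd

# Euclid's parametrisation: for coprime u > v >= 1 of opposite parity,
# (u^2 - v^2, 2uv, u^2 + v^2) is a primitive Pythagorean triple.
def primitive_triples(limit):
    out = []
    for u in range(2, limit + 1):
        for v in range(1, u):
            if gcd(u, v) == 1 and (u - v) % 2 == 1:
                out.append((u*u - v*v, 2*u*v, u*u + v*v))
    return out

triples = primitive_triples(4)
for a, b, c in triples:
    assert a*a + b*b == c*c   # it really is a right triangle
    assert gcd(a, b) == 1     # primitive: the legs share no common factor

print(triples)  # [(3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25)]
```

Since there are infinitely many valid pairs $(u,v)$, the loop bound is the only thing stopping the list from growing forever, matching the conclusion that there are infinitely many primitive triples.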
http://ecoursesonline.iasri.res.in/mod/page/view.php?id=125247
LESSON 30. Stability Analysis of Gravity Dams: Stability

30.1 Introduction: In this lesson we will learn two important safety requirements, viz. (i) stability against overturning and (ii) stability against sliding. Safety against induced stresses will be discussed in the next lesson. The cross-section of a typical gravity dam with all relevant forces is shown in Figure 30.1.

Fig. 30.1.

where,

W = self weight of the gravity dam. It acts at a distance x1 from the vertical line passing through the toe of the dam.

FV = weight of water on the inclined part of the upstream face. It acts at a distance x2 from the vertical line passing through the toe of the dam.

FH = ${1 \over 2}{\gamma _w}h_u^2$ = horizontal water pressure on the upstream face. It acts at a distance y from the base of the dam.

U = ${1 \over 2}{\gamma _w}\left( {{h_d} + {h_u}} \right)b$ = uplift force. It acts at a distance x3 from the vertical line passing through the toe of the dam.

R = resultant of W, FV, FH and U. Rx and Ry are the components of R.

The force due to horizontal water pressure (FH) and the uplift force (U) cause an overturning moment about the toe of the dam. This overturning moment is stabilized mostly by the self weight of the dam, W. The weight of water on the inclined part of the upstream face, FV, also produces some stabilizing moment. A gravity dam is considered to be safe against overturning if the stabilizing moment is higher than the overturning moment. The factor of safety against overturning is defined as,

$\text{FOS against overturning} = \dfrac{\text{Total stabilizing moment}}{\text{Total overturning moment}}$

The horizontal force acting on the dam is balanced either by friction alone or by friction and the shear strength of the joint. A dam will fail in sliding at its base, or at any other level, if the net horizontal force causing sliding is more than the resistance available at that level.
The factor of safety against sliding is defined as,

where,

μ = coefficient of friction

tc = permissible cohesion/shear stress

A = cross-sectional area

Ff = partial safety factor in friction (Table 1 in IS:6512 – 1984)

Fc = partial safety factor in cohesion/shear (Table 1 in IS:6512 – 1984)
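As a numerical sketch of the overturning check, the moments about the toe can be tallied directly from the forces named above. The helper `fos_overturning` and all force/lever-arm values are made up purely for illustration; they are not taken from the lesson or from IS:6512.

```python
# FOS against overturning = total stabilizing moment / total overturning moment,
# with all moments taken about the toe of the dam.
def fos_overturning(W, x1, Fv, x2, Fh, y, U, x3):
    stabilizing = W * x1 + Fv * x2   # self weight and water on the upstream face
    overturning = Fh * y + U * x3    # horizontal water pressure and uplift
    return stabilizing / overturning

# Made-up example values (forces in kN, lever arms in m):
fos = fos_overturning(W=50000, x1=12.0, Fv=4000, x2=18.0,
                      Fh=20000, y=10.0, U=15000, x3=14.0)
print(round(fos, 2))  # prints: 1.64
```

A value greater than 1 means the stabilizing moment exceeds the overturning moment; design codes typically require a comfortable margin above 1.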
https://math.stackexchange.com/questions/3003660/a-uniformly-cauchy-sequence-of-functions-is-uniformly-convergent-proof
A uniformly Cauchy sequence of functions is uniformly convergent proof

Let $$f_n:A\subseteq\mathbb{R}\to\mathbb{R}$$. $$f_n$$ is uniformly Cauchy $$\implies$$ $$\exists f:A\to\mathbb{R}$$ : $$f_n\xrightarrow{u}f$$ in $$A$$.

proof. $$\forall\varepsilon>0$$ $$\exists\nu$$ : $$\forall n,m>\nu$$ $$\sup_{x\in A}|f_n(x)-f_m(x)|<\varepsilon$$

I don't know why we start like that; I know that by definition a uniformly Cauchy sequence satisfies the second step of this proof:

$$\implies \forall\varepsilon>0$$ $$\exists\nu\in\mathbb{N}$$ : $$\forall n,m>\nu$$ $$\forall x\in A$$ $$|f_n(x)-f_m(x)|<\varepsilon$$

Where is the sup now? So if $$m\to+\infty$$

$$\implies \forall\varepsilon>0$$ $$\exists\nu\in\mathbb{N}$$ : $$\forall n>\nu$$ $$\forall x\in A$$ $$|f_n(x)-f(x)|\leq\varepsilon$$

OK, it's clear why $$m$$ disappears, but I don't understand why $$<\varepsilon$$ becomes $$\leq\varepsilon$$.

$$\implies \forall\varepsilon>0$$ $$\exists\nu\in\mathbb{N}$$ : $$\forall n>\nu$$ $$\sup_{x\in A}|f_n(x)-f(x)|\leq\varepsilon$$

This is not clear; where does the sup come from?

$$\implies\lim_{n\to\infty}\sup_{x\in A}|f_n(x)-f(x)|=0$$

This is directly from the definition of limit, and this means that

$$\implies f_n\xrightarrow{u}f$$ in $$A$$

• Please consider accepting my answer if it has helped :) – user667 Nov 19 '18 at 14:11

Recall the definition of supremum (it is the least upper bound). Thus, if we can choose $$v$$ such that $$\sup_{x\in A}|f_n(x)-f_m(x)|<\epsilon$$ whenever $$n,m>v$$, then we are guaranteed that $$\forall x\in A, |f_n(x)-f_m(x)|<\epsilon$$ whenever $$n,m>v$$.

Going the other way around, a strict inequality could become non-strict, since the supremum is an upper bound itself. Thus if $$|f_n(x)-f(x)|<\epsilon$$ for all $$x\in A$$, we have that $$\epsilon$$ is an upper bound over all $$x$$. In particular, it could be the least upper bound (supremum). Thus we have to write $$\sup_{x \in A}|f_n(x)-f(x)|\leq\epsilon$$ since equality could hold. I agree that this proof is somewhat unclear.
To be completely rigorous, we use the triangle inequality. Choose arbitrary $$x \in A$$. We have, $$|f_n(x)-f(x)|\leq|f_n(x)-f_m(x)|+|f_m(x)-f(x)|$$where $$f(x)$$ is the pointwise limit of the sequence $$\{f_n(x)\}$$. We know that there is a pointwise limit because $$\mathbb{R}$$ is complete. By definition of pointwise convergence, there exists some $$N_1$$ such that if $$m>N_1$$ we have $$|f_m(x)-f(x)|<\frac{\epsilon}{2}$$. By assumption, we can choose $$N$$ such that if $$n,m>N$$ we have $$|f_n(x)-f_m(x)|<\frac{\epsilon}{2}$$. Now set $$m>\max\{N_1,N\}$$ (this step is a rigorous way of saying $$m \rightarrow \infty$$). Putting it all together $$|f_n(x)-f(x)|\leq|f_n(x)-f_m(x)|+|f_m(x)-f(x)|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$$Because $$x$$ was an arbitrary point of $$A$$, we have found an $$N$$ (independent of $$x\in A$$) such that, $$\forall x\in A, n>N, |f_n(x)-f(x)|<\epsilon$$
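A numerical illustration of the statement (not part of the proof): for the sequence $$f_n(x)=x^n$$ on $$A=[0,1/2]$$, the sup-distances shrink exactly as the theorem predicts, since $$\sup_{x\in A}|f_n(x)-f_m(x)|$$ is attained at $$x=1/2$$. The grid size and bounds below are chosen just for this sketch.

```python
# f_n(x) = x**n on A = [0, 1/2] is uniformly Cauchy, with pointwise
# (and uniform) limit f = 0; the sup over A is attained at x = 1/2,
# so sup |f_n - f_m| <= (1/2)**min(n, m).

def sup_dist(n, m, points=1001):
    xs = [0.5 * k / (points - 1) for k in range(points)]
    return max(abs(x**n - x**m) for x in xs)

assert sup_dist(10, 20) <= 0.5**10   # uniform Cauchy tail bound
assert sup_dist(30, 1000) < 1e-9     # 0.5**30 is about 9.3e-10
```

Contrast with $$x^n$$ on all of $$[0,1]$$, where the sup-distance to the pointwise limit stays at 1 for every $$n$$: there the sequence converges pointwise but not uniformly, which is why the sup in the proof matters.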
http://www.itensor.org/support/1639/different-boson-number-cutoffs-for-different-sites?show=1662
Different boson number cutoffs for different sites

0 votes

asked

I have a system with two different types of degrees of freedom - one is high energy, and the other low energy. Therefore a good description for the low energy eigenstates of the system can probably be obtained by keeping only a few basis states for the high energy degrees of freedom, but keeping more states for the low energy degrees of freedom. I'm wondering how to implement such a thing? Here is my existing custom class. I'm envisioning having adim and edim, as opposed to just dim, and then constructing new operators based on those dimensions. Thanks!

namespace itensor {

class ExcitonSite;
using Exciton = BasicSiteSet<ExcitonSite>;

class ExcitonSite
{
IQIndex s;
int dim;

public:

ExcitonSite() { }

ExcitonSite(IQIndex I) : s(I) { }

ExcitonSite(int n, Args const& args = Args::global())
{
dim = args.getInt("on_site_dim", 5);
auto v = stdx::reserve_vector<IndexQN>(2 * dim + 1);
for (int j = -dim; j <= dim; ++j)
{
auto i = Index(std::to_string(j), 1, Site);
auto q = QN("Sz=", j);
v.emplace_back(i, q);
}
s = IQIndex(nameint("site=", n), std::move(v), Out, 0);
}

IQIndex index() const { return s; }

// Look up the basis state labelled j, for j in [-dim, dim].
IQIndexVal state(std::string const& state)
{
for (int j = -dim; j <= dim; ++j)
{
if (state == std::to_string(j)) { return s(j + 1 + dim); }
}
Error("State " + state + " not recognized");
return IQIndexVal{};
}

IQTensor op(std::string const& opname, Args const& args) const
{
auto sP = prime(s);
auto Op = IQTensor(dag(s), sP);
if (opname == "n")
{
for (int j = -dim; j <= dim; ++j)
{
Op.set(s(j + 1 + dim), sP(j + 1 + dim), (float)j);
}
}
else if (opname == "nsq")
{
for (int j = -dim; j <= dim; ++j)
{
Op.set(s(j + 1 + dim), sP(j + 1 + dim), (float)(j * j));
}
}
else if (opname == "gp")
{
for (int j = -dim; j <= dim - 1; ++j)
{
Op.set(s(j + 1 + dim), sP(j + 2 + dim), +1.0);
}
}
else if (opname == "igp")
{
for (int j = -dim; j <= dim - 1; ++j)
{
Op.set(s(j + 1 + dim), sP(j + 2 + dim), +1.0_i);
}
}
else if (opname == "gm")
{
for (int j = -dim + 1; j <= dim; ++j)
{
Op.set(s(j + 1 + dim), sP(j + dim), +1.0);
}
}
else if (opname == "igm")
{
for (int j = -dim + 1; j <= dim; ++j)
{
Op.set(s(j + 1 + dim), sP(j + dim), +1.0_i);
}
}
else
{
Error("Operator \"" + opname + "\" name not recognized");
}
return Op;
}
};

} //namespace itensor

#endif

commented by (70.1k points)
Hi, would you be willing to write this code based on version 3.0 of ITensor? I think you'd find the design of site sets and QNs specifically much easier and nicer. It's one of the key things we improved over version 2.

commented by (350 points)
Gotcha - will look into that, thank you.

1 Answer

+1 vote
answered by (70.1k points)
selected by

Best answer

Just to begin answering this question - it’s a good question. Yes, you can definitely create two different types of sites when making your own site set in ITensor. This is easier to do and works better in version 3 of ITensor.

Essentially all you have to do is first follow the procedure for making each type of site separately, using classes like BosonSite as an example, but adjust the dimension of the site index and/or the range of the quantum numbers to what you want.

Now that you have two different site classes, “AType” and “BType” say, you can make your site set as:

using MySiteSet = MixedSiteSet<AType,BType>;

which will make the odd-numbered sites AType sites and the even-numbered sites BType sites. See the sample/mixedspin.cc sample code for an example of this in action.

Alternatively, if the AType and BType sites are similar enough in their design, you could have them be the same type and control their properties by passing named arguments (Args) and looking at the site number when constructing them. But this could ultimately make things more complicated, since when getting operators you will also have to put in checks about what kind of site you are making operators for. Which of the two approaches is better then depends on the details.
Good luck! Miles commented by (350 points) This is very helpful, thank you Miles!